A Privacy Torts Solution to Postmortem Deepfakes


In 2021, Roadrunner, a documentary about the late celebrity chef Anthony Bourdain, became embroiled in controversy for using AI-generated voice technology to create a voiceover of Bourdain reading an email that he wrote but never spoke aloud.[1] When the director claimed to have received a blessing from Bourdain’s loved ones, Bourdain’s ex-wife tweeted, “I certainly was NOT the one who said Tony would have been cool with that.”[2] The Roadrunner controversy highlights the murky ethics of postmortem deepfakes. There is the question of whether the Bourdain deepfake caused any harm, and if it did, whether it harmed the deceased chef or his surviving ex-wife. Bourdain may never have said the words aloud, but they were his words. Even so, his ex-wife was upset on his behalf because he did not consent to the deepfake. At its core, the question is what care the living owe to the dead in a technological age where deepfakes can depict the dead doing or saying things they never did.

The answer to this question cannot be found in current privacy law, which does not provide redress for dignitary harms caused by deepfakes of the deceased. As it stands, deepfakes are primarily regulated by state laws through the right of publicity.[3] However, because the right of publicity only concerns commercial injury, redress is limited to persons who can demonstrate commercial value for their image or likeness (i.e., celebrities). At the same time, deepfakes have become increasingly real-looking,[4] easy to create,[5] and shareable with the entire world for free via social media.[6] The danger of deepfakes has crept into the lives of average persons, yet the law unfairly limits legal redress to celebrities.

While the law could provide redress for mental and emotional harms caused by deepfakes, privacy torts exclude the deceased. Whereas the right of publicity is grounded in property rights, which makes the right descendible after death, privacy torts are grounded in mental or emotional harms, which do not survive death.[7] What’s more, these mental and emotional injuries do not include a conception of human dignity, which goes to the heart of the injury caused by postmortem deepfakes: a violation of identity, personhood, and control over one’s legacy.[8] This Note addresses possible solutions to the novel issue of redressability for postmortem deepfakes based on existing privacy torts, arguing that a new exception is needed to protect the memory of the deceased, on behalf of both the living and the dead.

I. The Problem of Postmortem Deepfakes

In December 2017, popular tech blog Motherboard detailed a new phenomenon sweeping Reddit: deepfakes.[9] An eponymous redditor[10] posted a pornographic video with Gal Gadot’s face superimposed on the performer’s body.[11] Soon, the subreddit r/deepfakes was created, amassing 90,000 followers[12] and hosting similar pornographic videos with face-swapped female celebrities such as Taylor Swift, Daisy Ridley, and Maisie Williams.[13] Another redditor, user deepfakeapp, even developed an online application called FakeApp to assist redditors in creating deepfake videos from personal data sets using deepfakeapp’s algorithm.[14]

While the “fake” in deepfake signals these videos’ inauthenticity, the technology’s purpose is to appear real. The “deep” component of deepfake references “deep learning,” the machine learning phenomenon wherein computers utilize layered neural networks to enhance their own algorithms.[15] Through interconnected processes in the computer’s architecture that mimic neural connections in the human brain, a computer is able to improve its own performance.[16] Artificial intelligence programs which produce deepfakes are called Generative Adversarial Networks (GANs).[17] Ian Goodfellow and other researchers at the University of Montreal developed GANs by using two complementary algorithms.[18] The first algorithm is the “generator,” responsible for creating content.[19] The second algorithm is the “discriminator,” tasked with scrutinizing the content for authenticity.[20] If the discriminator can tell the content is fake, then the generator algorithm refines the content until it presents as sufficiently real to fool the discriminator.[21] This machine learning trains the computer to seamlessly alter each frame of a video[22] so that it looks real upon playback.[23] The end result is a deepfake: a depiction of something that never actually happened, but looks like it did.
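The generator-discriminator loop described above can be caricatured in a few lines of Python. This is a deliberately toy sketch, not a real GAN: here the “real” data are just numbers near a target value, the discriminator is a hand-written closeness test rather than a trained neural network, and every name in it (REAL_MEAN, Generator, discriminator) is invented for illustration.

```python
import random

random.seed(0)  # make the toy run reproducible

REAL_MEAN = 10.0  # stand-in for the "real" data the generator imitates

def discriminator(sample: float) -> bool:
    """Return True if the sample passes as 'real' (close to real data)."""
    return abs(sample - REAL_MEAN) < 0.5

class Generator:
    def __init__(self) -> None:
        self.param = 0.0  # starts out producing obviously fake output

    def generate(self) -> float:
        # Output the current parameter plus a little noise.
        return self.param + random.uniform(-0.1, 0.1)

    def refine(self) -> None:
        # Nudge the parameter toward output that fools the discriminator.
        self.param += 0.25 * (REAL_MEAN - self.param)

gen = Generator()
rounds = 0
# Adversarial loop: keep refining until the discriminator is fooled.
while not discriminator(gen.generate()):
    gen.refine()
    rounds += 1

print(f"discriminator fooled after {rounds} refinement rounds")
print(f"final generator parameter: {gen.param:.2f}")
```

In an actual GAN, both sides are neural networks trained jointly by gradient descent on opposing objectives; the hand-coded refinement step here merely stands in for the generator’s update against the discriminator’s feedback.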

Indeed, rapid advancements in deepfake algorithms have resulted in deepfakes which are effectively indistinguishable from reality. A recent study found that participants discerned real faces from fake ones at a rate no more accurate than a coin toss, and even rated GAN-generated faces as more trustworthy than real human faces.[24] Another study found that not only do people fail to reliably detect deepfakes, but they also show bias towards finding deepfakes to be authentic while simultaneously overestimating their ability to detect deepfakes.[25]

While the deepfake landscape is dominated by pornography, deepfakes have invaded many other cultural contexts.[26] The popular app TikTok is a hotbed for celebrity deepfakes[27] and deepfake memes.[28] Deepfakes have also been used for dueling political purposes: sometimes spreading awareness of the dangers of deepfakes for democracy,[29] while at other times interfering with democratic elections.[30] Recently, a deepfake of Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender to invading Russian forces circulated on social media.[31] Audio deepfakes have also been used to carry out two large bank heists.[32]

Deepfake technology has even been used to resurrect the dead. MIT professors produced a deepfake depicting an alternate history wherein Richard Nixon delivered his “In the Event of Moon Disaster” speech following the failure of the 1969 Apollo moon landing.[33] One genealogy company is offering consumers a new feature called “Deep Nostalgia,” which is able to animate photographs of dead relatives within seconds.[34] Public figures have used deepfake videos of deceased loved ones to create personal gifts, like one created by musician Kanye West to give to his then-wife, media personality Kim Kardashian, for her fortieth birthday.[35] Deepfakes of dead relatives have also been utilized to promote political causes. Parents of 2018 Parkland shooting victim Joaquin Oliver created a deepfake of their son urging voters to support gun control in the 2020 election: “I mean, vote for me. Because I can’t.”[36]

However, deepfakes of the deceased have been employed most prominently in the entertainment industry through retroactive recreation.[37] Retroactive recreation generally depicts actors who die mid-production by superimposing old footage of the actor’s face onto a body double through CGI during post-production.[38] A deepfake of James Dean, who died in 1955, stars in the upcoming film Finding Jack.[39] Peter Cushing, who died in 1994, was featured via deepfake in 2016’s Rogue One: A Star Wars Story.[40] The use of deepfakes of dead actors in films nonetheless remains controversial. Some film critics lambasted Disney’s use of deepfake technology to feature Peter Cushing, calling it “a digital indignity”[41] and “jarringly discomfiting.”[42]

Posthumous deepfakes are becoming increasingly prevalent not only in entertainment but also in the lives of ordinary people. This new technology is becoming more difficult to distinguish from reality, thus blurring the lines between truth and falsehood for those who no longer have a voice to defend themselves. Because one’s legacy is an important part of identity and personhood, the law should be able to provide redress for injuries inflicted by postmortem deepfakes.

Currently, a deceased individual’s estate may only litigate commercial injuries through the right of publicity. Privacy law generally permits actionability for three types of injuries: commercial harm in the right of publicity, mental and emotional harm in the privacy torts, and dignitary injury in the European model.[43] There is no legal redress for postmortem mental, emotional, or dignitary injuries because only living persons experience those things.[44] Relatives are also foreclosed from suing for invasions of privacy on behalf of the deceased.[45] Even though a loved one may be humiliated, aghast, or distraught after viewing a deepfake of the deceased, they have no actionable claim.[46] Additionally, loved ones may not sue to restore the tarnished reputation of a dead relative.[47] This is called the no relational right rule, which states that privacy suits may not be asserted by proxy.[48]

However, the no relational right rule has a few exceptions. Relatives may sue for disclosure of photos of the deceased’s body due to the outrageousness of such conduct.[49] Courts have emphasized that this exception is to protect the privacy of living relatives rather than that of the deceased.[50] Florida has also conferred statutory protection against the release of autopsy photos to non-family members without good cause.[51] Because the exceptions to the no relational right rule are extremely limited, most postmortem would-be privacy claims are creatively repackaged as right of publicity claims in order to afford relief to the deceased’s relatives.[52] In the same way, postmortem deepfake litigation has been shoehorned into the right of publicity because privacy interests are unfairly limited to living individuals.

II. A Privacy Torts Approach to Postmortem Deepfakes

A. The Right of Publicity

Current legal protections surrounding deepfakes largely come from the right of publicity. At common law, the right of publicity protects against another’s appropriation of one’s likeness, for the appropriator’s own benefit, without consent.[53] The Third Restatement of Unfair Competition reworked the right of publicity to protect against another’s use of one’s identity or persona without consent, in such a way that one is identifiable from the use and the use damages the commercial value of one’s identity.[54]

The right of publicity[55] originated in privacy rights from Samuel Warren and Louis D. Brandeis’s seminal article, The Right to Privacy.[56] Twelve years later, the New York Court of Appeals rejected the common law right to privacy in Roberson v. Rochester Folding Box, wherein a teenage girl sued a flour company for using her image in their advertising without her consent.[57] In response, the New York legislature created a limited statutory right to privacy in 1903,[58] the first privacy statute in the United States.[59]

However, the right of publicity’s basis shifted from a justification found in privacy to one in property following Melville B. Nimmer’s article, The Right of Publicity.[60] Nimmer contended that the right of publicity ought to be founded in property rights because the injury is based on the commercial injury of the unpermitted use of one’s likeness; in contrast, privacy rights are based on the mental or emotional harms caused by the unpermitted use.[61] This distinction is significant because property rights survive death, whereas privacy rights end when the person whose privacy was invaded dies.[62] In the only case where the Supreme Court has considered the right of publicity, the Court upheld the property-based conception.[63]

There is no federal statutory right of publicity,[64] so enforcement is relegated to state common law and statutes surrounding the right. Today, thirty-three states recognize the statutory right of publicity.[65] This piecemeal approach has resulted in much confusion, as the right’s contours can vary significantly from jurisdiction to jurisdiction.[66]

The right of publicity has been likened to the “Wild West” due to increasing litigation coupled with a lack of legal standardization surrounding the right.[67] Indeed, the bounds of the right of publicity remain hazy; the tort proscribes conduct without defining the specific harm it seeks to redress.[68] In some states, plaintiffs must demonstrate economic injury, and in others, personal injury is also recognized.[69] However, in some states it remains unclear whether one needs to have a commercially valuable identity while living to have a postmortem right of publicity claim.[70] Additionally, right of publicity statutes, like those in Washington[71] and Hawaii,[72] feature “all comers” provisions, which allow estates to bring in-state claims for those who were domiciled out of state upon death. These provisions have been criticized for creating an imbalance in the right of publicity and encouraging heirs to litigate claims in specific states (i.e., “forum shop”) where they can commodify the postmortem personas of their relatives.[73]

Scholars Robert Post and Jennifer Rothman have suggested redefining the right of publicity into four categories[74] based on plaintiffs’ interests: the right of performance, the right of commercial value, the right of control, and the right of dignity.[75] Although Post and Rothman’s reorganization of the right of publicity provides a helpful framework for considering what interests the right is designed to protect, they explicitly state their framework is not intended to apply to postmortem rights.[76] Ultimately, the common law right of publicity is in disarray and does not provide helpful guidance for dealing with a postmortem right of publicity.

However, the statutory right of publicity may provide an avenue for modernizing and standardizing the right. Indeed, the state where the right of publicity originated recently enacted the first statute extending the right of publicity to postmortem deepfakes.[77] In 2020, New York unanimously passed an amended right of publicity law with Senate Bill S5959D.[78] S5959D modernized the state’s right of publicity law by expanding the right to cover postmortem claims and digital replicas[79] and by addressing the dissemination of sexually explicit materials. S5959D holds liable any person who uses a deceased person’s likeness for commercial purposes without prior consent[80] and explicitly recognizes the right of publicity as a property right.[81] The law exempts the entertainment industry from liability for depicting a deceased performer using deepfakes in scripted works as long as there is consent.[82] S5959D also limits postmortem right of publicity claims to forty years after the death of the allegedly injured individual and only governs those individuals domiciled in New York at the time of death.[83]

In many ways, S5959D represents how the right of publicity could evolve to address new injuries posed by modern technology. By adding a postmortem right of publicity that covers the use of digital replicas, S5959D provides a right of action for deepfakes of the deceased. The statute also likely skirts First Amendment chilling-effect concerns by balancing exceptions for the film industry and other artistic, literary, or political works with protections of the deceased through the consent requirement.[84]

Although S5959D modernized New York’s right of publicity, its shortcomings mirror those of the common law right of publicity as a tool to remedy the harms caused by postmortem deepfakes. First, the right is too limited in scope. The commercial purposes requirement effectively limits claims to those by celebrities and public figures—those whose images have commercial value. However, deepfake technology has become accessible to anyone with a smartphone.[85] The ubiquity of deepfake technology means the average person is right to be concerned about postmortem deepfakes of themselves or loved ones.[86] But postmortem deepfakes of average persons are unlikely to fulfill the commercial purpose requirement since their identities would not possess independent commercial value, meaning those claims could not be litigated.

Moreover, the S5959D disclaimer exception may render the right of publicity toothless. The statute exempts from liability those who use a deceased performer’s digital replica if it features a “conspicuous disclaimer” in the credits or in related advertisements stating the use was unauthorized.[87] Adding a disclaimer will cost little to nothing for those employing digital replicas, meaning the provision will likely not deter any use of postmortem deepfakes. Additionally, the statute’s “deceased performer” definition remains vague. The statute defines a “deceased performer” as one domiciled in New York upon death who “for gain or livelihood was regularly engaged in acting, singing, dancing, or playing a musical instrument.”[88] Rothman criticized this definition as “both over- and underinclusive,” as it could include anyone from “high school students in the drama club” to “a violin-playing ER doctor who serenades their patients.”[89] Additionally, this provision, without justification, excludes athletes.[90]

Furthermore, the statute arbitrarily limits the period in which claims against postmortem deepfakes may be brought to forty years after death. New York legislators have not explained their decision to limit the postmortem period to forty years.[91] Of course, rights of action are almost always subject to statutes of limitations, but extending the period beyond forty years (to, perhaps, one hundred years) would better protect the deceased by ensuring those in living memory would not be subject to offensive depictions.[92]

Ultimately, the right of publicity, even in its most modern conception, seems ill-equipped to address the issues posed by postmortem deepfakes. Although enacting a federal right of publicity statute would improve upon the current inconsistent protections resulting from piecemeal doctrine,[93] there remain fundamental flaws with the right of publicity in its foundations. By limiting the right to deepfakes used for commercial purposes, non-celebrities are effectively precluded from litigating claims because their identities lack independent monetary value. To remedy this, future right of publicity statutes could provide more meaningful protections than New York’s law by extending the statute of limitations beyond forty years and clearly limiting First Amendment artistic exceptions to those which contribute to free speech values.

B. The Appropriation Tort

The elements of the appropriation tort resemble those of the right of publicity: using another’s name or likeness for one’s own use or benefit.[94] The key difference is that “use” in an appropriation claim focuses on mental or emotional harm, rather than commercial harm.[95] Still, the boundary between the appropriation tort and the right of publicity remains blurry, with disagreement even today on how they operate. When creating the four privacy torts, Prosser failed to distinguish between the right of publicity and appropriation, possibly because he viewed them as essentially the same or did not want to “upset the elegance” of the four torts model by splintering the fourth category into appropriation, based on mental and emotional injury, and the right of publicity, based on commercial injury.[96] In early privacy cases, courts used the terms interchangeably, collapsing the right of publicity into appropriation.[97] This stands in contrast to Nimmer’s treatment of the right of publicity as an entirely separate tort[98]—for possibly dubious reasons.[99] Later, in Zacchini, the Court widened the gulf between appropriation and publicity.[100] There, the Court held the right of publicity has “little to do with protecting feelings or reputation”[101] but is about “protecting the proprietary interest of the individual in his act” and is “analogous to the goals of patent and copyright law.”[102]

Another famous privacy case, Carson v. Here’s Johnny Portable Toilets, Inc., illustrates why it may be helpful to delineate the right of publicity from the appropriation tort in the context of postmortem deepfakes.[103] Johnny Carson sued a portable toilet company called “Here’s Johnny Portable Toilets, Inc.” for misappropriating his television catchphrase “Here’s Johnny.”[104] Carson claimed an invasion of his privacy and publicity rights.[105] Carson’s appropriation injury was based not on the use of his catchphrase, but on the association of himself with an embarrassing product: portable toilets.[106] In contrast, the claimed commercial injury of the right of publicity claim derived from the use per se of his catchphrase, regardless of any social consequences or feelings of embarrassment.[107] Here, Carson prevailed under the right of publicity and the Sixth Circuit largely disregarded his appropriation claim, reasoning that Carson provided no proof of mental or emotional harm, and, at any rate, the disposition of the case on the right of publicity rendered his privacy claim moot.[108]

Carson illustrates that the appropriation tort may provide a better avenue for deepfake litigation than the right of publicity, because the average person’s harm lies more in the embarrassment of an unwanted association than in the per se use of a proprietary interest in identity. Although the remedies sought through the publicity and appropriation torts are, for all practical purposes, identical (both seek compensatory damages and injunctions), this distinction in what triggers each claim (an undesirable association of any identity versus per se use of a valuable identity) allows the appropriation tort to protect more people. Although appropriation could address some mental and emotional harms from the invasion of privacy, the lesser-used privacy tort of false light may be the more prudent avenue for postmortem deepfake litigation because the nature of deepfakes inherently presents falsehoods.

C. False Light

False light may be uniquely suited to provide redress for injuries from postmortem deepfakes. Liability for false light is predicated upon the publicity of a matter portraying another in a false light with actual malice such that it would be highly offensive to a reasonable person.[109] False light is related to the more well-known tort of defamation.[110] However, false light provides a better avenue for litigating deepfakes because recovery for false light, unlike defamation, does not require reputational damage. Instead, “[i]t is enough that [the plaintiff] is given unreasonable and highly objectionable publicity that attributes to [the plaintiff] characteristics, conduct or beliefs that are false, and so is placed before the public in a false position.”[111] The very nature of deepfakes satisfies this falsity requirement, permitting recovery where defamation would not.[112] Hypothetically, if a deepfake were posted publicly on social media, false light could allow recovery simply for the fact of its posting, whereas defamation would likely require the plaintiff to prove it reached enough people or was so severe as to constitute reputational harm.

Despite its possible benefits to address postmortem deepfakes, false light remains a nebulous area of privacy law because of its similarities to defamation. Prosser himself perceived that false light run amok could be capable of “swallowing up and engulfing the whole law of public defamation.”[113] However, the opposite has happened in practice. Most false light claims are resolved under the New York Times standard, the legal standard for defamation actions.[114] The language in the Second Restatement of Torts mirrors this, as well.[115]

Additionally, postmortem deepfakes must be held to be highly offensive to a reasonable person in order to be litigated under false light. Undoubtedly, pornographic deepfakes meet this standard.[116] But non-pornographic deepfakes present the more difficult question of whether it is inherently offensive to simply be portrayed doing or saying things one has never done or said, regardless of whether the actual conduct depicted is offensive.[117] What’s more, the false depiction must be highly offensive. The highly offensive standard may be more difficult to establish for the issue of postmortem deepfakes, as relatives of the deceased must litigate claims, despite not being the actual subjects of the deepfake. Furthermore, demonstrating mental or emotional injuries caused by the damage to the deceased’s memory may be challenging. Although false light may provide the superior framework to litigate deepfakes of the living, postmortem claims are nevertheless excluded and, thus, the tort cannot offer redress for deepfakes of the deceased.

III. Finding Dignity in False Light

Rather than limiting postmortem claims to commercial interests or mental and emotional claims to the living, privacy law should recognize dignitary harms to the deceased’s legacy caused by postmortem deepfakes. Postmortem deepfakes portray with disturbing realism the deceased doing and saying things they never did. This directly interferes with the dignity of the deceased, an almost universally accepted principle which American law has declined to recognize. Indeed, the common phrase “do not speak ill of the dead” derives from a fifteenth-century Latin translation of Diogenes’s writings from the early third century.[118] Burial rituals are considered sacred in many religions and cultures.[119] Contravening society’s respect for the dignity of those no longer living, postmortem deepfakes effectively exhume the dead and deform them without their consent. Though a person may die, their identity lives on in their legacy and in the memories of those who knew them.

While postmortem deepfakes may cause mental, emotional, and commercial injuries, dignity best addresses the interests at stake for the deceased and their relatives. Warren and Brandeis first framed the right to privacy as one founded in human dignity.[120] Additionally, Europe has long recognized dignity as the foundation for privacy protections.[121] Informational self-determination, the right to control one’s public image by determining what information about oneself is shared and what remains secret,[122] is an integral component of dignity; choosing what parts of oneself to share is fundamental to healthy relationships and identity formation—the very essence of personhood.[123] While American privacy law conceives of privacy as an individual right which dies alongside the individual, dignity recognizes that privacy is more than an individual matter: it is a social value.[124]

Postmortem deepfakes warp relationships between the living and the dead by interfering with reality and replacing memories of a loved one with a crude simulacrum. They disfigure the legacy of the dead and distort the influence of personhood beyond one’s lifetime through false representations. Because these dignitary injuries go to fundamental and sacred components of the human experience, a limited exception in privacy law for postmortem deepfakes ought to be recognized.

Such an exception should permit the litigation of postmortem deepfakes by recognizing dignitary injuries to the deceased and their loved ones. This exception would also not disrupt Prosser’s four-tort conception of privacy law because it could fit seamlessly into an existing tort, false light. Due to the highly offensive nature of defiling the dignity of the dead, postmortem deepfakes would be most effectively litigated under the false light tort. Such postmortem exceptions in privacy law have precedent, like those to the no relational right rule.[125] Under this exception, right of publicity claims could still be brought to protect the commercial interests of the deceased’s estate.

IV. Potential Objections

A. First Amendment Protections

A proposed dignitary exception in privacy torts must withstand First Amendment speech protections.[126] Although deepfakes are false speech, they may be afforded protections as part of free expression. Despite past opinions suggesting false statements were unprotected by the First Amendment,[127] a plurality of the Supreme Court concluded in 2012 that false statements are not automatically excluded from First Amendment protection; the government may regulate false statements intended to cause “legally cognizable harm” where there is a causal link between the regulation and the injury it seeks to prevent.[128] This has resulted in a balancing approach wherein a false statement’s propensity to cause serious harms is weighed against the statement’s contribution to free speech values.[129]

Some deepfakes can contribute positively to free speech values, like those involving satire or parody, entertainment and the arts, or political critiques.[130] But certain types of deepfakes are categorically considered harmful, like those involving non-consensual pornography or the deliberate spread of misinformation.[131] Because deepfakes involve free expression, the First Amendment forbids an outright ban on their creation.[132] Thus, any regulation of deepfakes must come after their creation and be tailored to the serious harms they cause.[133]

Because the Supreme Court has not carved out a First Amendment exemption for false speech, postmortem deepfakes are not unprotected speech per se. Postmortem deepfakes lie somewhere between the two extremes of obviously beneficial and obviously harmful; for this reason, claims against postmortem deepfakes would have to survive a constitutional balancing inquiry.[134] However, First Amendment protections are not impossible to overcome. For example, advertisements using postmortem deepfakes would likely not be afforded First Amendment protection under the Court’s commercial speech doctrine, which permits the prohibition of misleading or false advertising.[135] Ultimately, the limited dignitary exception is unlikely to run afoul of First Amendment protections because of the propensity of postmortem deepfakes to cause dignitary harms. A well-balanced exception would also exempt postmortem deepfakes created with consent and those used for artistic or political purposes.

B. Standing

When developing possible solutions for regulating postmortem deepfakes, the issue of standing must be considered because it is a threshold issue for bringing federal claims. Recent changes in Article III standing doctrine may place restraints on postmortem deepfake litigation in federal courts. In Lujan v. Defenders of Wildlife,[136] the Court held that Article III standing requires the claimant to demonstrate three things: an injury in fact, that the alleged conduct caused said injury, and the redressability of the injury.[137] To demonstrate an injury in fact, a claimant must show “an invasion of a legally protected interest which is (a) concrete and particularized” and “(b) actual or imminent, not ‘conjectural’ or ‘hypothetical.’”[138] The injury must also be more than a “bare procedural violation,” meaning that the claimant must personally experience an actual injury beyond the violation of a congressionally-created statutory right.[139] Recently, the Court further heightened the requirement of concreteness to establish an injury in fact; concreteness now requires the injury in fact bear a close relation to a recognized common law injury.[140] These developments in standing doctrine have limited plaintiffs’ ability to bring claims in federal court; even if a claimant can demonstrate a violation of a federal statutory right, that alone is not enough to be heard in federal court. However, this doctrine does not apply to standing in state courts, where the violation of statutory rights may be sufficient to litigate a claim.[141]

These heightened standing requirements may present difficulties for postmortem deepfake litigation. The Supreme Court has not been sympathetic to claims related to privacy injuries where the actual damages derive from mental, emotional, or dignitary harms.[142] Nevertheless, the Court has granted standing to plaintiffs for the publication of false information because of their injury’s relation to a common law defamation action.[143] While post-Spokeo[144] standing requirements may present a hurdle for litigating dignitary injuries caused by postmortem deepfakes in federal courts, claims could easily be analogized to defamation like the prevailing privacy claims in Ramirez[145] or simply litigated in state court.[146]


Postmortem deepfakes cannot be properly addressed by current privacy law. The right of publicity permits recovery for the deceased but limits redress to commercial injuries, thereby excluding non-celebrities. Privacy torts like appropriation and false light allow recovery for mental and emotional harms, but only for the living. Furthermore, because deepfake litigation relies upon state tort law, protections for those harmed by deepfakes remain uneven and ill-defined. Currently, only New York provides redress for deepfakes of the deceased, through its statutory right of publicity. Given the constraints of current privacy law, a limited exception ought to be created that permits redress of dignitary injuries caused by postmortem deepfakes through the false light tort. Privacy law ought to respond to new injuries that did not exist when Warren and Brandeis first “discovered” privacy law[147] or when Prosser distilled privacy into four torts seventy years later.[148] Today, sixty-two years after Prosser, privacy law needs yet another transformation to protect the dignity of the dead, those who cannot speak for themselves and yet are being forced to speak words not their own.

Olivia Wall[149]*

  1. . See Helen Rosner, The Ethics of a Deepfake Anthony Bourdain Voice, New Yorker (July 17, 2021), https://www.newyorker.com/culture/annals-of-gastronomy/the-ethics-of-a-deepfake-anthony-bourdain-voice [https://perma.cc/PUR7-N97R].

  2. . Ottavia (@OttaviaBourdain), Twitter (July 15, 2021, 11:22 PM), https://twitter.com/OttaviaBourdain/status/1415889455005716485?s=20 [https://perma.cc/C2ND-CGY9].

  3. . See Alex Wyman, Defining the Modern Right of Publicity, 15 Tex. Rev. Ent. & Sports L. 167, 168 (2014). The first federal regulation of deepfakes was signed into law in 2019. See National Defense Authorization Act for Fiscal Year 2020, Pub. L. No. 116-92, §§ 5709, 5724, 133 Stat. 1198, 2168–70, 2177–78 (2019) [hereinafter 2020 NDAA]. The 2020 NDAA requires monitoring of foreign deepfake threats, establishes a reporting requirement for deepfakes targeting U.S. elections, and encourages the creation of deepfake-detection technologies through a “Deepfakes Prize” competition. Id. For more information on the 2020 NDAA, see Matthew F. Ferraro, Jason C. Chipman & Stephen W. Preston, The Federal “Deepfakes” Law, 3 J. Robotics, A.I. & L. 229 (2020).

  4. . See Emily Willingham, Humans Find AI-Generated Faces More Trustworthy than the Real Thing, Sci. Am. (Feb. 14, 2022), https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/ [https://perma.cc/L7AV-55U4].

  5. . It is possible to create a deepfake in five minutes “without writing a single line of code.” Dimitris Poulopoulos, How to Produce a DeepFake Video in 5 Minutes, Towards Data Sci. (Apr. 2, 2020), https://towardsdatascience.com/how-to-produce-a-deepfake-video-in-5-minutes-513984fd24b6 [https://perma.cc/ZX2C-SD2D]. Indeed, anyone with an iPhone can produce a deepfake “with a single source photo and zero technical expertise.” See Geoffrey A. Fowler, Anyone with an iPhone Can Now Make Deepfakes. We Aren’t Ready for What Happens Next., Wash. Post (Mar. 25, 2021, 8:00 AM), https://www.washingtonpost.com/technology/2021/03/25/deepfake-video-apps/ [https://perma.cc/N729-PENU].

  6. . As of May 2020, an Amsterdam-based deep learning company identified over 44,000 deepfakes online, 95% of which were pornographic. Sensity, I amsterdam, https://www.iamsterdam.com/en/business/key-sectors/ai/testimonials/sensity [https://perma.cc/L5ZB-3TR5] (last visited Nov. 7, 2022).

  7. . J. Thomas McCarthy & Roger E. Schechter, Rights of Publicity and Privacy § 9:1 (2d ed. 2022).

  8. . Warren and Brandeis styled this idea as “inviolate personality.” See Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 205 (1890). Scholar Edward J. Bloustein interpreted this to mean “the individual’s independence, dignity and integrity; . . . man’s essence as a unique and self-determining being.” Edward J. Bloustein, Privacy as an Aspect of Human Dignity: An Answer to Dean Prosser, 39 N.Y.U. L. Rev. 962, 971 (1964).

  9. . Samantha Cole, AI-Assisted Fake Porn Is Here and We’re All Fucked, Motherboard (Dec. 11, 2017, 1:18 PM), https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn [https://perma.cc/9YV6-RTG6].

  10. . “u/deepfakes.” Redditor is the term for a Reddit user.

  11. . Cole, supra note 9.

  12. . David Song, A Short History of Deepfakes, Medium (Sept. 23, 2019), https://medium.com/@songda/a-short-history-of-deepfakes-604ac7be6016 [https://perma.cc/THD6-WVBB].

  13. . Samantha Cole, We Are Truly Fucked: Everyone Is Making AI-Generated Fake Porn Now, Motherboard (Jan. 24, 2018, 12:13 PM), https://www.vice.com/en/article/bjye8a/reddit-fake-porn-app-daisy-ridley [https://perma.cc/B33Y-KD5P].

  14. . Id.

  15. . Mika Westerlund, The Emergence of Deepfake Technology: A Review, 9 Tech. Innovation Mgmt. Rev. 39, 40–41 (2019).

  16. . Vivek Sharma, How Do Neural Networks Mimic the Human Brain?, USC Marshall Sch. of Bus. (Nov. 6, 2017), https://www.marshall.usc.edu/blog/how-do-neural-networks-mimic-human-brain [https://perma.cc/M2VC-4B67].

  17. . Russell Spivak, “Deepfakes”: The Newest Way to Commit One of the Oldest Crimes, 3 Geo. L. Tech. Rev. 339, 342–45 (2019).

  18. . Ian Goodfellow et al., Generative Adversarial Networks, 63 Commc’ns ACM 139 (2020), https://dl.acm.org/doi/pdf/10.1145/3422622 [https://perma.cc/84HA-6CX4]. Ian Goodfellow, Introduction to GANs, NIPS 2016, YouTube (Aug. 24, 2017), https://youtu.be/9JpdAg6uMXs [https://perma.cc/3MEM-ZYFC].

  19. . Spivak, supra note 17, at 343.

  20. . Id.

  21. . Id.

  22. . Audio files may also be deepfakes. The algorithm used to create audio deepfakes works similarly to the one for video deepfakes, utilizing automatic voice recognition (AVR) technology. AVR works by distinguishing between two voices, presented and enrolled, to determine whether they are coming from the same speaker. The goal is for the two voices to sound indistinguishable to AVR, which means a convincing audio deepfake has been created. See Dominic David, Analyzing the Rise of Deepfake Voice Technology, Forbes Tech. Council (May 10, 2021, 8:00 AM), https://www.forbes.com/sites/forbestechcouncil/2021/05/10/analyzing-the-rise-of-deepfake-voice-technology/?sh=6ef9c2e36915 [https://perma.cc/C8YN-34YU].

  23. . Spivak, supra note 17, at 343–44.

  24. . See Willingham, supra note 4.

  25. . See Nils C. Köbis, Barbora Doležalová & Ivan Soraperra, Fooled Twice: People Cannot Detect Deepfakes but Think They Can, 24 iScience 1 (2021), https://www.cell.com/action/showPdf?pii=S2589-0042%2821%2901335-3 [https://perma.cc/H524-JFLN].

  26. . See, e.g., Karen Hao & Will Douglas Heaven, The Year Deepfakes Went Mainstream, MIT Tech. Rev. (Dec. 24, 2020), https://www.technologyreview.com/2020/12/24/1015380/best-ai-deepfakes-of-2020/ [https://perma.cc/AWS4-EE2Q].

  27. . One TikTok user with over three million followers, “@deeptomcruise,” creates hyper-realistic deepfakes of actor Tom Cruise playing Dave Matthews Band covers, @deeptomcruise, TikTok (May 10, 2021), https://www.tiktok.com/@deeptomcruise/video/6960707912863943941 [https://perma.cc/PD5X-AF8C] and praising the bubble gum centers of candy suckers, @deeptomcruise, TikTok (May 10, 2021), https://www.tiktok.com/@deeptomcruise/video/6936178521114938630 [https://perma.cc/4RGT-3F8G]. In one particularly meta deepfake, Cruise performs a magic trick, winkingly assuring viewers “It’s all real, it’s all the real thing.” @deeptomcruise, TikTok (Feb. 25, 2021), https://www.tiktok.com/@deeptomcruise/video/6933305746130046214 [https://perma.cc/R5D9-UGAB].

  28. . For example, TikTok users utilize deepfake technology to create memes; one depicts famous and unfamous people alike singing the karaoke song “Baka Mitai” from a Yakuza video game. See Cal Jeffrey, Bizarre-looking Deepfake Memes Are Easy-to-Make with Online Tools, TechSpot (Aug. 31, 2020, 5:30 PM), https://www.techspot.com/news/86576-bizarre-looking-deepfake-memes-easy-make-online-tools.html [https://perma.cc/F3BF-5R2V].

  29. . Nonpartisan political advocacy group RepresentUs created deepfake ads which depicted Vladimir Putin and Kim Jong-un saying America did not require outside election interference to ruin its own democracy. See Karen Hao, Deepfake Putin Is Here To Warn Americans About Their Self-Inflicted Doom, MIT Tech. Rev. (Sept. 29, 2020), https://www.technologyreview.com/2020/09/29/1009098/ai-deepfake-putin-kim-jong-un-us-election/ [https://perma.cc/8PCT-X3QR]. Additionally, a 2018 video of President Obama (voiced by comedian Jordan Peele) appeared to say things like “President Trump is a total dipshit” before revealing the truth behind the deepfake technology. See Tim Mak, Where Are the Deepfakes in This Presidential Election, NPR (Oct. 1, 2020, 5:05 AM), https://www.npr.org/2020/10/01/918223033/where-are-the-deepfakes-in-this-presidential-election [https://perma.cc/C2CB-C2YY].

  30. . In 2020, Indian politician Manoj Tiwari used a deepfake to depict himself speaking the most popular Hindi dialect among migrant workers, Haryanvi, to discourage them from voting for the rival political party; in the original video, Tiwari spoke English. See Nilesh Christopher, We’ve Just Seen the First Use of Deepfakes in an Indian Election Campaign, Vice (Feb. 18, 2020, 6:27 AM), https://www.vice.com/en/article/jgedjb/the-first-use-of-deepfakes-in-indian-election-by-bjp [https://perma.cc/A2DS-ZHPS].

  31. . See Bobby Allyn, Deepfake Video of Zelenskyy Could Be ‘tip of the iceberg’ in Info War, Experts Warn, NPR (Mar. 16, 2022, 8:26 PM), https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia [https://perma.cc/RQ8E-3DKQ].

  32. . The first heist occurred in 2019, when a deepfake of a British energy company executive’s voice was used to trick the company’s managing director into wiring $240,000 to a “secret account” in Hungary, which was owned by the conspirators. See Drew Harwell, An Artificial-Intelligence First: Voice-Mimicking Software Reportedly Used in a Major Theft, Wash. Post (Sept. 4, 2019, 6:27 PM), https://www.washingtonpost.com/technology/2019/09/04/an-artificial-intelligence-first-voice-mimicking-software-reportedly-used-major-theft/ [https://perma.cc/Y8RK-YFUC]. In early 2020, conspirators used an audio deepfake of a company director to convince a Hong Kong bank manager to authorize $35 million in transfers; U.A.E. police are currently investigating this incident in connection with a global scheme of heists involving at least 17 people. See Thomas Brewster, Fraudsters Cloned Company Director’s Voice in $35 Million Bank Heist, Police Find, Forbes (Oct. 14, 2021, 7:01 AM), https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=3634f0f17559 [https://perma.cc/3JPL-RZ45].

  33. . Francesca Panetta & Halsey Burgund, In Event of Moon Disaster, MIT Ctr. for Advanced Virtuality (Nov. 2019), https://moondisaster.org/ [https://perma.cc/69NN-FTLD]. For a transcript of the original speech, see William Safire, Speechwriter for President Richard Nixon, In Event of Moon Disaster (July 18, 1969), https://www.archives.gov/presidential-libraries/events/centennials/nixon/exhibit/nixon-online-exhibit-disaster.html [https://perma.cc/B5SV-Y9HT].

  34. . Rich Haridy, Deepfake Tech Used To Bring Dead Relatives Back to Life, New Atlas (Feb. 28, 2021), https://newatlas.com/computers/deepfake-nostalgia-myheritage-animate-deceased-relatives/ [https://perma.cc/BNR2-5PD3]. The creators of Deep Nostalgia promise they will not enable speech features (e.g., adding lip motion or voices to videos) and ask consumers to only use their own historical photographs. See Deep Nostalgia, MyHeritage, https://www.myheritage.com/deep-nostalgia [https://perma.cc/QU9P-NR7H] (last visited Nov. 7, 2022). However, the company broke this commitment in its own advertisement of the technology featuring a deepfake of Abraham Lincoln. See MyHeritage, Abraham Lincoln Discovers His Family History on MyHeritage, YouTube (Feb. 11, 2021), https://www.youtube.com/watch?v=kEtiajHLmQY [https://perma.cc/HW4R-SPNX].

  35. . The video depicted Kardashian’s deceased father, attorney Robert Kardashian, delivering a speech seventeen years after his death wherein he praised his daughter’s beauty, success, and “marr[iage to] the most, most, most, most, most[] genius man in the whole world—Kanye West.” See Matthew Dunne-Miles, Deepfakes, Dead Relatives and Digital Resurrection, Face (Apr. 6, 2021), https://theface.com/society/deepfakes-dead-relatives-deep-nostalgia-ai-digital-resurrection-kim-kardashian-rob-kardashian-grief-privacy [https://perma.cc/6WZK-MMRJ].

  36. . Tamara Kneese, How Data Can Create Full-On Apparitions of the Dead, Slate (Nov. 2, 2020, 6:14 PM), https://slate.com/technology/2020/11/robert-kardashian-joaquin-oliver-deepfakes-death.html [https://perma.cc/HHT4-8EJY].

  37. . Ben Laney, Bringing the Dead Back to Life: Preparing the Estate for a Post-Mortem Acting Role, 12 Est. Plan. & Cmty. Prop. L.J. 349, 354 (2020).

  38. . Id. Furious 7 utilized this method so Paul Walker could posthumously star in the lead role; the movie created a deepfake of Walker from a composite of 350 CGI shots wherein Walker’s face from past footage or digital recreation was swapped onto body doubles played by his brothers. Id.

  39. . Vincent Villani, Film Technology Brings the Unreal to Life on Screen. But Has it Gone too Far?, Banner (Apr. 23, 2021), https://thebannercsi.com/2021/04/23/so-obsessed-with-the-idea-that-you-could/ [https://perma.cc/8CK4-6HVX].

  40. . Laney, supra note 37, at 356–57.

  41. . Catherine Shoard, Peter Cushing Is Dead. Rogue One’s Resurrection Is a Digital Indignity, Guardian (Dec. 21, 2016), https://www.theguardian.com/commentisfree/2016/dec/21/peter-cushing-rogue-one-resurrection-cgi [https://perma.cc/SS8A-LUTT].

  42. . Rich Haridy, Star Wars: Rogue One and Hollywood’s Trip Through the Uncanny Valley, New Atlas (Dec. 19, 2016), https://newatlas.com/star-wars-rogue-one-uncanny-valley-hollywood/47008/ [https://perma.cc/689M-RSGD]. However, one critic defended the deepfake as “an organic continuation” of technological evolution and suggested the film deserved an Oscar for special effects. See Michael Cavna, One of the Best Performances in ‘Rogue One’ Is by an Actor Who Died in 1994, Wash. Post (Dec. 15, 2016, 10:15 AM), https://www.washingtonpost.com/news/comic-riffs/wp/2016/12/15/one-of-the-best-performances-in-rogue-one-is-by-an-actor-who-died-in-1994/ [https://perma.cc/4LUJ-7482]. Perhaps to avoid similar backlash, Disney chose to recreate a young Carrie Fisher as Princess Leia in Rogue One using conventional CGI rather than deepfake technology. See Joey Paur, Deepfake Tech Drastically Improves CGI Princess Leia in ROGUE ONE, GeekTyrant, https://geektyrant.com/news/deepfake-tech-drastically-improves-cgi-princess-leia-in-rouge-one [https://perma.cc/P2NF-M9A6] (last visited Nov. 7, 2022). YouTube user “Shamook” demonstrated the limits of deepfake alternatives by improving upon Disney’s CGI likeness of Fisher; in comparison to Disney’s multi-million-dollar special effects budget, Shamook’s more realistic rendering only required one day, 500 images of Fisher, and an $800 personal computer. Id.

  43. . As the European Data Protection Supervisor states, “in the EU, human dignity is recognised as an absolute fundamental right. In this notion of dignity, privacy or the right to a private life, to be autonomous, in control of information about yourself, to be let alone, plays a pivotal role.” Data Protection, Eur. Data Prot. Supervisor, https://edps.europa.eu/data-protection/data-protection_en [https://perma.cc/Q85L-HQG7] (last visited Nov. 7, 2022).

  44. . McCarthy & Schechter, supra note 7, § 9.1 (noting courts and commentators unanimously agree on this rule).

  45. . Id.

  46. . Id.

  47. . Id.

  48. . Id.

  49. . Id. The U.S. Supreme Court held that photos from scenes of a suicide qualified under FOIA’s invasion of personal privacy exemption. Nat’l Archives & Recs. Admin. v. Favish, 541 U.S. 157, 168–69 (2004). The Georgia Supreme Court held that parents could claim an invasion of privacy against the unauthorized publication of a photograph of their dead child’s deformed body. Bazemore v. Savannah Hosp., 155 S.E. 194 (Ga. 1930). Finally, the California Court of Appeal held that parents had an invasion of privacy claim against two patrol officers who emailed photographs of their daughter’s decapitated body to their friends and family for a Halloween prank. Catsouras v. Dep’t of Cal. Highway Patrol, 181 Cal. App. 4th 856, 874 (2010).

  50. . McCarthy & Schechter, supra note 7, § 9.1 (noting courts and commentators unanimously agree on this rule).

  51. . Id. This law was upheld as constitutional when famous race car driver Dale Earnhardt’s family successfully won an injunction prohibiting the county medical examiner from releasing his autopsy photos. Id. (citing Earnhardt ex rel. Est. of Earnhardt v. Volusia Cnty., 2001 WL 992068 (Fla. Cir. Ct. 2001)).

  52. . Id.

  53. . Mark Roesler & Garrett Hutchinson, What’s in a Name, Likeness, and Image? The Case for a Federal Right of Publicity Law, 13 Landslide 20 (2020).

  54. . Id.

  55. . The phrase “right of publicity” first appeared in 1953. Haelan Lab’ys, Inc. v. Topps Chewing Gum, Inc., 202 F.2d 866, 868 (2d Cir. 1953).

  56. . Warren & Brandeis, supra note 8. This article developed the theory of the “right of privacy” based on the right to be let alone. Privacy rights uphold human dignity and maintain the law ought to protect against public disclosure of private facts. Id.

  57. . Roberson v. Rochester Folding Box Co., 64 N.E. 442 (N.Y. 1902).

  58. . 1903 N.Y. Laws 308. For further discussion of the Roberson case and New York’s law, see McCarthy & Schechter, supra note 7, § 1.16.

  59. . Jennifer E. Rothman, New York, Rothman’s Roadmap to the Right of Publicity [hereinafter Rothman, New York], https://www.rightofpublicityroadmap.com/law/new-york [https://perma.cc/9JQB-B276] (last visited Nov. 7, 2022). Two years later, the Georgia Supreme Court recognized the common law right to privacy in Pavesich v. New Eng. Life Ins. Co., 50 S.E. 68, 80–81 (Ga. 1905) (holding the unauthorized use of a man’s photograph in a life insurance advertisement, without his knowledge or consent, violated the right to privacy).

  60. . Melville B. Nimmer, The Right of Publicity, 19 L. & Contemp. Probs. 203 (1954).

  61. . See J. Thomas McCarthy, Melville B. Nimmer and the Right of Publicity: A Tribute, 34 UCLA L. Rev. 1703, 1706–07 (1987). Law professor William Prosser also recognized the right of publicity in the appropriation tort. William Prosser, Privacy, 48 Calif. L. Rev. 383, 406–07 (1960). Prosser famously outlined the four privacy torts: intrusion, disclosure, false light, and appropriation. Id. at 389.

  62. . McCarthy & Schechter, supra note 7, § 9.1. The Second Restatement of Torts explains “privacy is a personal right, peculiar to the individual whose privacy is invaded. The cause of action is not assignable, and it cannot be maintained by other persons such as members of the individual’s family, unless their own privacy is invaded along with his. The only exception to this rule involves the appropriation to the defendant’s own use of another’s name or likeness.” Restatement (Second) of Torts § 652I (Am. L. Inst. 1977).

  63. . Zacchini v. Scripps-Howard Broad. Co., 433 U.S. 562, 576–77 (1977). For an in-depth discussion of Zacchini, see Jennifer E. Rothman, The Right of Publicity: Privacy Reimagined for a Public World, 75–81 (2018).

  64. . See Right of Publicity, Int’l Trademark Ass’n, https://www.inta.org/topics/right-of-publicity/ [https://perma.cc/DGE3-NQ8N] (last visited Nov. 7, 2022). See also Roesler & Hutchinson, supra note 53, at 21.

  65. . McCarthy & Schechter, supra note 7, § 6:2. For more details on state right of publicity law, see Jonathan Faber, Statutes & Interactive Map, Right of Publicity, https://rightofpublicity.com/statutes [https://perma.cc/4BZB-YZJ5] (last visited Nov. 7, 2022).

  66. . Wyman, supra note 3, at 169 (comparing the right across the Second, Third, Sixth, Eighth, Ninth, and Tenth Circuits).

  67. . Robert C. Post & Jennifer E. Rothman, The First Amendment and the Right(s) of Publicity, 130 Yale L.J. 86, 90 (2020) (citing Brian D. Wassom, Identity and Its Consequences: The Importance of Self-Image, Social Media, and the Right of Publicity to IP Litigators, in Litigation Strategies for Intellectual Property Cases: Leading Lawyers on Analyzing Key Decisions and Effectively Litigating IP Cases 37, 43 (2012); Rothman, supra note 63, at 61–62, 87–97).

  68. . Post & Rothman, supra note 67, at 89–90.

  69. . Id.

  70. . In California, the postmortem right of publicity provision probably only requires a demonstration of commercial value at the time of death or due to the death, not that one developed commercial value during one’s lifetime by exploiting their likeness. See Jennifer E. Rothman, California, Rothman’s Roadmap to the Right of Publicity, https://rightofpublicityroadmap.com/state_page/california/ [https://perma.cc/DPL7-9P4T] (last visited Nov. 7, 2022).

  71. . See Jennifer E. Rothman, Washington, Rothman’s Roadmap to the Right of Publicity, https://rightofpublicityroadmap.com/state_page/washington/ [https://perma.cc/ZD3T-EXA8] (last visited Nov. 7, 2022).

  72. . See Jennifer E. Rothman, Hawaii, Rothman’s Roadmap to the Right of Publicity, https://rightofpublicityroadmap.com/state_page/hawaii/ [https://perma.cc/6PDB-YCQ3] (last visited Nov. 7, 2022).

  73. . See Christian B. Ronald, Note, Burdens of the Dead: Postmortem Right of Publicity Statutes and the Dormant Commerce Clause, 42 Colum. J.L. & Arts 123, 125–26 (2018).

  74. . This mirrors William Prosser’s categorization of privacy torts into four types in his 1960 article. See Prosser, supra note 61, at 389; Post & Rothman, supra note 67, at 92.

  75. . Post & Rothman, supra note 67, at 92 (“These torts protect, respectively, plaintiffs’ interests in controlling the use of their performances, in preserving the commercial value of their identity, in protecting the autonomy of their personality, and in maintaining the dignity of their person. In any given right of publicity action, one or more of these four distinct interests may be at stake.”) (footnote omitted).

  76. . Id. at 102 n.68. However, Post and Rothman admit that copyright law may provide guidance; just as copyrights of a book survive an author, Post and Rothman suggest extending the right of performance category of publicity for a limited amount of time postmortem (e.g., less than twenty-five years). Id.

  77. . See Rothman, New York, supra note 59. The bill is S5959D and is codified in N.Y. Civ. Rights Law § 50-f (McKinney 2021).

  78. . S5959D, 2019–2020 Reg. Sess. (N.Y. 2020). In 2017, the New York legislature rejected the first attempt to amend its right of publicity, Assembly Bill A8155B. A8155B, 2017–2018 Reg. Sess. (N.Y. 2017). A8155B was met with significant opposition by the film industry for not carving out an exception for fictionalization of a celebrity’s image. See Brian Higgins, At the Intersection of AI, Face Swapping, Deep Fakes, Right of Publicity, and Litigation, A.I. Tech. & the L. (June 17, 2018), http://aitechnologylaw.com/2018/06/at-the-intersection-of-ai-face-swapping-deep-fakes-right-of-publicity-and-litigation/ [https://perma.cc/7TX6-RF8L].

  79. . The statute defines digital replica as “a newly created, original, computer-generated, electronic performance by an individual in a separate and newly created, original expressive sound recording or audiovisual work in which the individual did not actually perform, that is so realistic that a reasonable observer would believe it is a performance by the individual being portrayed and no other individual.” Civ. Rights § 50-f(1)(c). The term “deepfake” here is used synonymously with “digital replica” in the statute. The statute excludes electronic reproductions of an individual’s performance or other duplications consisting entirely of the “independent fixation of other sounds, even if such sounds imitate or simulate the voice of the individual.” Id.

  80. . Civ. Rights § 50-f(2)(a) (“Any person who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, on or in products, merchandise, or goods, or for purposes of advertising or selling, or soliciting purchases of, products, merchandise, goods, or services, without prior consent from the person or persons specified in subdivision four of this section, shall be liable for any damages sustained by the person or persons injured as a result thereof.”).

  81. . Civ. Rights § 50-f(3). A previous amended version of the bill expressly distinguished the right of publicity as independent from the right of privacy. S5959B § 2, 2019–2020 Reg. Sess. (N.Y. 2020).

  82. . Civ. Rights § 50-f(2)(b) (“Any person who uses a deceased performer’s digital replica in a scripted audiovisual work as a fictional character or for the live performance of a musical work shall be liable for any damages sustained by the person or persons injured as a result thereof if the use occurs without prior consent.”).

  83. . Civ. Rights § 50-f(8) (“An action shall not be brought under this section by reason of any use of a deceased personality’s name, voice, signature, photograph, or likeness occurring after the expiration of forty years after the death of the deceased personality.”).

  84. . See Jennifer E. Rothman, New York Reintroduces Much Improved Postmortem Right of Publicity Bill, Rothman’s Roadmap to the Right of Publicity (July 14, 2020) [hereinafter Rothman, New York Reintroduces], https://rightofpublicityroadmap.com/news_commentary/new-york-reintroduces-much-improved-postmortem-right-publicity-bill/ [https://perma.cc/U5XM-WHM8].

  85. . See 8 Best Deepfake Apps and Tools in 2022, RankRed (Sept. 1, 2022), https://www.rankred.com/best-deepfake-apps-tools/ [https://perma.cc/FPN7-6CBU].

  86. . See supra Part I.

  87. . Civ. Rights § 50-f(2)(b).

  88. . Civ. Rights § 50-f(1)(a).

  89. . See Rothman, New York Reintroduces, supra note 84.

  90. . Id.

  91. . Id.

  92. . But see Rothman, supra note 63, at 184 (“any postmortem right of publicity should be narrow in scope and duration . . . perhaps for a shorter postmortem period of ten to twenty-five years”).

  93. . See Hayley Duquette, Digital Fame: Amending the Right of Publicity to Combat Advances in Face-Swapping Technology, 20 J. High Tech. L. 82, 115–18 (2020); Roesler & Hutchinson, supra note 53, at 23–24.

  94. . Restatement (Second) of Torts § 652C (Am. L. Inst. 1977).

  95. . See McCarthy & Schechter, supra note 7, § 5:62 (“A ‘bare bones’ list of the common law elements to plead and prove to establish a prima facie claim for invasion of appropriation privacy is the following: (1) Defendant, without permission, has used some aspect of the plaintiff’s identity or persona in such a way that plaintiff is identifiable from defendant’s use. (2) Defendant’s use causes some damage to plaintiff’s peace of mind and dignity, with resulting injury measured by plaintiff’s mental or physical distress and related damage.”) (footnotes omitted).

  96. . Id. § 1:23.

  97. . Id. (referencing the Roberson and Pavesich decisions). In the early 20th century, a Columbia Law Review article and the Kentucky Court of Appeals also treated the terms “right of publicity” and “right of privacy” synonymously. See Rothman, supra note 63, at 27–29.

  98. . See supra Section II.A.

  99. . Rothman contended Nimmer’s article inaccurately framed the right of publicity as novel legal territory, rather than recognizing its longstanding origins in privacy law, to further his interests as an aspiring professor who wished to publish a splashy piece of scholarship, and as an attorney for Paramount Studios, who stood to profit from the “creation” of a fully transferable right of publicity rooted in property. Rothman, supra note 63, at 68–71.

  100. . Zacchini v. Scripps-Howard Broadcasting Co., 433 U.S. 562, 573 (1977). Although the case significantly reframed the privacy law landscape as one distinct from the commercial interest of publicity, the Court did not rule on the merits; the case was remanded to the trial court, where the parties settled, leaving no determination of the facts. Rothman, supra note 63, at 80.

  101. . Zacchini, 433 U.S. at 573. Although the Court was contrasting the right of publicity with the false light tort, in reference to Time, Inc. v. Hill, 385 U.S. 374 (1967), this explanation also applies to the appropriation tort because both seek to address the same mental and emotional injuries caused by the invasion of privacy. Zacchini, 433 U.S. at 569–73.

  102. . Id. at 573.

  103. . 698 F.2d 831 (6th Cir. 1983).

  104. . Id. at 832–33.

  105. . Id. at 832.

  106. . Id. at 834.

  107. . Id. at 834–35, 837.

  108. . Id. at 834.

  109. . Restatement (Second) of Torts § 652E (Am. L. Inst. 1977) (“One who gives publicity to a matter concerning another that places the other before the public in a false light is subject to liability to the other for invasion of privacy, if (a) the false light in which the other was placed would be highly offensive to a reasonable person, and (b) the actor had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.”).

  110. . The elements of defamation are “(a) a false and defamatory statement concerning another; (b) an unprivileged publication to a third party; (c) fault amounting at least to negligence on the part of the publisher; and (d) either actionability of the statement irrespective of special harm or the existence of special harm caused by the publication.” Id. § 558. The adverse effect on reputation distinguishes a per se defamatory statement from one which is “so trivial that an imputation of them is not defamatory at all and can therefore support an action for neither slander nor libel.” Id. § 569, cmt. d.

  111. . Id. at § 652E, cmt. b.

  112. . The Ninth Circuit found that Nevada law permitted recovery under false light where the plaintiff experienced subjective emotional distress from falsehoods even though “they cause[d] no loss of esteem” such that she could not recover under defamation. Flowers v. Carville, 310 F.3d 1118, 1132 (9th Cir. 2002).

  113. . Prosser, supra note 61, at 401.

  114. . McCarthy & Schechter, supra note 7, § 1:22. The New York Times standard requires a showing of “actual malice” (i.e., “knowledge that the statements are false or in reckless disregard of the truth”). Time, Inc. v. Hill, 385 U.S. 374, 387 (1967).

  115. . Restatement (Second) of Torts § 652E (Am. L. Inst. 1977).

  116. . For further discussion of false light claims for pornographic deepfakes, see Harris, infra note 131, at 115–19.

  117. . See supra note 27 and accompanying text.

  118. . See De mortuis nil nisi bonum, Wikipedia, https://en.wikipedia.org/wiki/De_mortuis_nil_nisi_bonum [https://perma.cc/WL3S-BFK5] (last updated Apr. 26, 2022).

  119. . See Julie Kirk, Death Rituals & Traditions Around the Globe, Love to Know, https://dying.lovetoknow.com/Death_Rituals [https://perma.cc/5EE6-PHPP].

  120. . See Warren & Brandeis, supra note 8, at 205; see also Neil Richards, Why Privacy Matters (2021).

  121. . See Holly Kathleen Hall, Restoring Dignity and Harmony to United States-European Union Data Protection Regulation, 23 Comm. L. & Pol’y 125, 135 (2018).

  122. . Id.

  123. . See Robert C. Post, Three Concepts of Privacy, 89 Geo. L.J. 2087, 2092–94 (2001); see also Richards, supra note 120.

  124. . Post, supra note 123, at 2094 (“Privacy as dignity locates privacy in precisely the aspects of social life that are shared and mutual. Invading privacy causes injury because we are socialized to experience common norms as essential prerequisites of our own identity and self-respect.”). See also Data Protection, supra note 43 (“Privacy is not only an individual right but also a social value.”)

  125. . See supra Part I.

  126. See Jack Langa, Deepfakes, Real Consequences: Crafting Legislation to Combat Threats Posed by Deepfakes, 101 B.U. L. Rev. 761, 780–81 (2021).

  127. Id. at 780 n.109 (quoting Hustler Mag., Inc. v. Falwell, 485 U.S. 46, 52 (1988) (“False statements of fact are particularly valueless; they interfere with the truth-seeking function of the marketplace of ideas, and they cause damage to an individual’s reputation that cannot easily be repaired by counterspeech, however persuasive or effective.”)).

  128. United States v. Alvarez, 567 U.S. 709, 719, 725 (2012) (plurality opinion).

  129. Langa, supra note 126, at 781.

  130. Nina I. Brown, Deepfakes and the Weaponization of Disinformation, 23 Va. J.L. & Tech. 1, 32–33 (2020).

  131. Douglas Harris, Deepfakes: False Pornography Is Here and the Law Cannot Protect You, 17 Duke L. & Tech. Rev. 99, 115–19 (2019); see also Lindsey Wilkerson, Still Waters Run Deep(fakes): The Rising Concerns of “Deepfake” Technology and Its Influence on Democracy and the First Amendment, 86 Mo. L. Rev. 407, 428–30 (2021).

  132. Brown, supra note 130, at 34.

  133. Id. at 34–37.

  134. Id.

  135. In re R.M.J., 455 U.S. 191, 203 (1982) (“Misleading advertising may be prohibited entirely.”).

  136. Lujan v. Defenders of Wildlife, 504 U.S. 555 (1992).

  137. Id. at 560–61.

  138. Id. at 560 (citations omitted).

  139. Spokeo, Inc. v. Robins, 578 U.S. 330, 339–41 (2016). For more discussion on the effects of Spokeo on standing, see The Honorable Henry E. Hudson, Christopher M. Keegan & P. Thomas DiStanislao III, Standing in a Post-Spokeo Environment, 30 Regent U. L. Rev. 11 (2018).

  140. TransUnion LLC v. Ramirez, 141 S. Ct. 2190, 2204 (2021).

  141. Rosenbach v. Six Flags Ent. Corp., 129 N.E.3d 1197 (Ill. 2019) (holding plaintiff did not need to demonstrate actual damage beyond the violation of her rights under the Biometric Information Privacy Act).

  142. Doe v. Chao, 540 U.S. 614, 617–18 (2004) (denying plaintiff relief because he did not meet the “actual damages” requirement for a violation of the Privacy Act where his only evidence of emotional distress was claiming to be “torn . . . all to pieces” and “greatly concerned and worried” about the disclosure of his Social Security Number); F.A.A. v. Cooper, 566 U.S. 284 (2012) (denying plaintiff relief for a violation of his rights under the Privacy Act because the civil remedy provision excludes mental and emotional distress from “actual damages”).

  143. Ramirez, 141 S. Ct. at 2208–09.

  144. Spokeo, Inc. v. Robins, 578 U.S. 330 (2016).

  145. TransUnion LLC v. Ramirez, 141 S. Ct. 2190 (2021).

  146. See supra note 141 and accompanying text.

  147. See Warren & Brandeis, supra note 120.

  148. See Prosser, supra note 61.

  149. * J.D. Candidate (2023), Washington University School of Law; B.S. (2018), Corban University. Thank you to Professor Neil Richards, The Cordell Institute for Policy in Medicine & Law, and the Washington University Law Review staff for providing me with feedback and help throughout the editing process. Most of all, thank you to my family for providing encouragement and support in pursuing my dream of law school.

Published by Olivia Wall
