Right of Publicity Bill Would Federally Regulate AI-Generated Fakes
As deepfakes proliferate across the Internet, the ability of AI tools to generate convincing, hyperreal replicas of a person’s likeness has raised a number of new issues, including whether rights of publicity (or “name, image and likeness” laws) can protect against improper use. Currently in the US, rights of publicity are governed largely by state law, and the scope and level of protection vary significantly from one state to the next. For example, California, Nebraska, Ohio, Oklahoma, Pennsylvania, Tennessee, Texas, and Wisconsin provide both statutory and common law rights that are expansive (in Indiana and Oklahoma, the publicity right survives 100 years after the person’s death). Some states give middling protection, whether by common law or statute. Alaska, Colorado, Delaware, Idaho, Iowa, Kansas, Maine, Maryland, Mississippi, Montana, North Carolina, North Dakota, Oregon, Vermont, and Wyoming provide no protection at all. The proposed NO FAKES Act, released on October 12, 2023 by a bipartisan group of senators, aims to establish the first federal right protecting the image, voice, and visual likeness of individuals in the wake of a flood of AI-generated replicas.
The proposed NO FAKES Act would create a right, lasting until 70 years after the individual’s death, for an individual (or an appropriate rights holder) to authorize the use of the individual’s image, voice, or visual likeness in a digital replica. Unauthorized production of a digital replica, or making an unauthorized digital replica available to the public (such as by publication, distribution, or transmission) with knowledge that it is unauthorized, gives rise to a civil cause of action, with liability of $5,000 per violation or the actual damages suffered by the injured party. Lack of knowledge that the digital replica was unauthorized can be a defense to liability for making it available to the public. However, displaying a disclaimer stating that the digital replica is unauthorized by the applicable individual or rights holder is not a defense. The bill expressly disclaims preemption, so the federal law would supplement existing state laws, setting a universal floor of protection across all states.
The NO FAKES bill represents a much-needed effort to bring federal uniformity to the protection of images, voices, and visual likenesses against improper use through digital replicas. However, the following points in the bill may warrant further evaluation:
- The breadth and subjectivity of what constitutes a “digital replica,” the subject of regulation
The bill would regulate soundalike and lookalike digital replicas under a subjective standard: whether the digital replica is “nearly indistinguishable” from the claimant or is “readily identifiable” as the claimant, regardless of whether the creator, publisher, or distributor of the digital replica intends, or makes an affirmative representation, that the digital replica be recognized as a particular individual. The breadth and subjectivity of this standard could lead to unintended consequences, including claims by individuals who assert that a digital creation “looks like me” or “sounds like me.”
Notably, the bill does not mention a person’s name, unlike the state-law right of publicity, which typically protects a person’s name against misappropriation for commercial benefit. Adding a requirement that there be some affirmative assertion, whether direct or indirect, that the image, voice, or likeness is attributed to a particular person may better target the protection the bill intends to achieve.
Further, a “digital replica” is defined as a “newly-created, computer-generated, electronic representation.” This definition is broad, far broader than deepfakes and the products of generative AI alone, and it could unduly restrict the use of existing non-AI technologies such as digital photography and digital recording.
- Are the exceptions as currently drafted properly tailored to protect the publicity rights of individuals against generative AI?
The bill provides a long list of liability exceptions, including when the digital replica is used as part of a news, public affairs, or sports broadcast or report, a documentary, docudrama, or historical or biographical work; for purposes of comment, criticism, scholarship, satire, or parody; or in an advertisement or commercial announcement for any of the foregoing. While these exceptions appear to carve out conduct protected by the First Amendment, the bill could still raise First Amendment issues in other circumstances (such as use in merchandise,[1] comic books,[2] video games,[3] or prints,[4] where some courts have upheld use of a person’s likeness under the First Amendment). On the other hand, the list of exceptions covers a number of areas where AI deepfakes are already causing problems; in those situations, the bill would fail to regulate users of deepfakes and would undermine the protection it aims to achieve.
- Can authorization be implied?
The bill does not clearly define the authorization or consent that must be obtained. As drafted, it appears that merely alleging lack of authorization would be enough to state a claim under the act. On the other hand, there may be circumstances in which authorization by the individual is implied, such as through posting on social media.
Stay tuned for developments on the NO FAKES bill.
[1] Rosa & Raymond Parks Inst. for Self Dev. v. Target Corp., 90 F. Supp. 3d 1256, 1263-65 (M.D. Ala. 2015).
[2] Winter v. DC Comics, 69 P.3d 473, 480 (Cal. 2003).
[3] Noriega v. Activision/Blizzard, Inc., 42 Media L. Rep. 2740 (Cal. Sup. Ct. 2014).
[4] ETW Corp. v. Jireh Publ’g, Inc., 332 F.3d 915, 936-38 (6th Cir. 2003).