
No AI FRAUD Act Would Create IP Rights to Prevent Voice and Likeness Misappropriation

“Generally, the No AI FRAUD Act would recognize that every individual has a property right in their own likeness and voice, akin to other forms of IP rights.”

Today, U.S. Representatives María Elvira Salazar (R-FL) and Madeleine Dean (D-PA) introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act of 2024 to create legal mechanisms by which Americans can prevent unauthorized uses of their likenesses and voices by generative AI platforms. The bill seeks to provide for intellectual property (IP) rights in an individual’s voice and likeness as well as remedies including statutory damages and disgorged profits.

IP Rights in Individual’s Likeness Would Be Transferable, Descendible

The discussion draft recently released by Reps. Salazar and Dean recounts several recent examples of AI-generated recordings and images that have had damaging effects on the individuals whose likenesses were misappropriated. These include a pair of separate incidents last October involving actor Tom Hanks and high school students in Westfield, New Jersey. The bill also cites a recent report by the U.S. Department of Homeland Security finding, among other things, that unauthorized AI-generated deepfakes are commonly created to depict pornography.

Generally, the No AI FRAUD Act would recognize that every individual has a property right in their own likeness and voice, akin to other forms of IP rights. The rights provided by the bill would be both transferable and descendible, and would exist for purposes of commercial exploitation for a period of 10 years after the individual’s death. Under the current version of the draft, likeness and voice rights need not be commercially exploited during the individual’s life for the IP rights to survive death. Once the 10-year period has expired, the rights would terminate after two consecutive years of non-exploitation or upon the death of all devisees and heirs.

Liability for unauthorized AI-generated reproductions under the No AI FRAUD Act would extend to any person or entity making available a “personalized cloning service,” which the bill defines as any algorithm, software, or other technology having the primary purpose of creating digital voice replicas or depictions of particular individuals. Beyond generative AI platform providers, liability for likeness and voice misappropriation via AI would attach to anyone publishing or distributing a digital voice replica with knowledge that the use is unauthorized, as well as to anyone materially contributing to either form of conduct proscribed by the bill.

First Amendment Defense Contemplates Public Interest in Likeness Misappropriations

The draft version of the No AI FRAUD Act would establish statutory damages for both forms of conduct prohibited by the bill. Those providing a personalized cloning service that generates misappropriated likenesses would pay the greater of $50,000 per violation or actual damages plus the violating party’s profits. Those distributing misappropriations with knowledge that they are unauthorized would similarly pay the greater of $5,000 per violation or actual damages plus the violating party’s profits. Individuals seeking profits would need to provide proof of gross revenues attributable to the unauthorized use, while defendants would have to prove deductible expenses. The No AI FRAUD Act would also provide for punitive damages and attorneys’ fees awardable to injured parties.

Disclaimers regarding unauthorized reproductions or violating uses of personalized cloning services would not serve as a defense for unauthorized simulations of voices or likenesses. However, the bill would establish a First Amendment defense requiring a court to balance IP interests in voice and likeness against the public interest in access to the unauthorized use. Enumerated factors for the court’s consideration include whether the use is commercial, whether a particular individual is necessary for and relevant to the primary expressive purpose of the violating work, and whether the use adversely impacts the economic interests of those owning or licensing voice and likeness rights.

Liability under the No AI FRAUD Act would be limited to non-negligible specific harms suffered by the misappropriated individual. These harms include an elevated risk of financial or physical injury, severe emotional distress, and a likelihood that the use deceives the public or a court. Any digital depictions or voice replicas that are sexually explicit or include intimate images would constitute per se harm under the bill. The bill also creates a four-year statute of limitations for pursuing civil actions for alleged violations.

The No AI FRAUD Act has already received support from the Recording Industry Association of America (RIAA) and the Human Artistry Campaign, the latter of which has also backed the introduction of Tennessee state legislation creating similar protections against AI-powered misappropriations.

Steve Brachmann