Bipartisan Group of Senators Introduce Bill to Address AI Deepfakes and Protect Intellectual Property – AI: The Washington Report | Mintz – ML Strategies
[co-author: Matthew Tikhonovsky]
- Senators Maria Cantwell (D-WA), Marsha Blackburn (R-TN), and Martin Heinrich (D-NM) introduced the COPIED Act.
- The bill aims to address the rise of AI deepfakes, increase disclosures for AI-generated content, and prohibit the unauthorized use of copyrighted content to train AI models.
- The bill has received support from key stakeholders in the music, entertainment, and journalism industries, but its ultimate passage through this Congress is unlikely.
On July 11, 2024, Senators Maria Cantwell (D-WA), Marsha Blackburn (R-TN), and Martin Heinrich (D-NM) introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act).
The bill aims to address the rise of deepfakes and limit the unauthorized use of content for the training of AI models and systems. To this end, the bill would direct federal agencies to develop standards for AI-generated content detection, establish AI disclosure requirements for developers and deployers of AI systems, and prohibit the unauthorized use of copyrighted content to train AI models.
The Bill Is Consistent with Senate Working Group’s AI Agenda
On May 15, 2024, the Bipartisan Senate AI Working Group released its sweeping AI agenda. As we covered, the policy roadmap identified several AI-related concerns that warrant further action, including concerns around deepfakes, transparency, and intellectual property.
Specifically, the roadmap highlighted that the rise of deepfakes – doctored images or videos that show an individual saying something they did not in fact say – may be used to influence elections by “amplifying disinformation and eroding trust.” In the same vein, the roadmap also underscored the need for “a coherent approach to public-facing transparency requirements” for synthetic content – audio, visual, or textual information that has been generated or significantly altered by AI. Transparency and disclosure requirements for AI-generated content would allow people to distinguish synthetic content from authentic content, mitigating the effects of harmful deepfakes and other doctored digital representations.
Finally, given the concerns voiced by creators in the music, entertainment, and journalism industries around the unauthorized use of their content to train AI models, the AI Working Group called for legislation that addresses “consent and disclosure” issues around the use of content and data for training AI models.
The COPIED Act
The COPIED Act would address several AI-related concerns around deepfakes, transparency, and intellectual property. The bill’s requirements would fall into three broad categories:
- Agency Action on AI Standards
  - The bill directs the Under Secretary of Commerce for Standards and Technology to “establish a public-private partnership to facilitate the development of standards regarding content provenance information technologies and the detection of synthetic content and synthetically-modified content.” The proposed bill defines “content provenance information” to mean “state-of-the-art, machine-readable information documenting the origin and history of a piece of digital content, such as an image, a video, audio, or text.”
  - The bill also directs the Under Secretary to establish “grant challenges and prizes” to fund and award projects that “detect and label synthetic content” and “develop cybersecurity and other countermeasures to defend against tampering with detection tools.”
  - The bill directs the National Institute of Standards and Technology to carry out research programs to foster advances in “technologies for synthetic content and synthetically-modified content detection, watermarking, and content provenance information,” as well as measures to “prevent tampering with such technologies.”
- Requirements for AI Developers and Deployers
  - The proposed bill requires developers and deployers of AI systems that generate content to provide users “with the ability to include content provenance information” in the content within two years after the bill becomes law. This requirement also extends to developers and deployers of AI systems that are used to generate covered content – a digital representation of any copyrighted work.
- Prohibited Actions to Protect Creators
  - The proposed bill makes it unlawful “for any person to knowingly remove, alter, tamper with, or disable content provenance information.”
  - The bill also prohibits the use of “covered content with content provenance” to train an AI model or to generate synthetic content without the “express, informed consent of the person who owns the covered content.”
The bill creates three main channels of enforcement. First, it directs the FTC to enforce the bill’s requirements by providing that a violation of the bill also constitutes a violation of Section 5 of the Federal Trade Commission Act. Second, it authorizes state attorneys general to bring actions against violators. Finally, it creates a private right of action, allowing individuals to sue those who violate the bill’s rules.
Conclusion
Since its introduction, the COPIED Act has received broad support from key stakeholders in the music, entertainment, and journalism industries, including SAG-AFTRA, the Recording Academy, and the National Newspaper Association. Despite this support and the bill’s bipartisan sponsorship, however, its passage is far from certain. As we’ve covered, congressional action on AI is complicated by disagreements about implementation and by concerns that AI regulation must strike a delicate balance between promoting innovation, safeguarding free speech, and mitigating the risks inherent in AI.
Time is running out on this Congress’ ability to move any AI-related bill to final action. We will continue to monitor, analyze, and issue reports on the developments related to the COPIED Act and other AI federal activity.