
AI saw significant legislative developments in 2024

“Although the federal government did not enact AI legislation this year, the continued introduction of AI bills and the decision of Committees to pass AI bills through markup suggests that we will see some form of AI legislation in 2025, or even further in the future.”

Although artificial intelligence (AI) has existed in some form for decades, the recent Generative AI (GenAI) boom has brought it back into the limelight. The sudden popularity of ChatGPT and other newer systems has pushed GenAI, and AI in general, into new industries. With new technological developments comes regulation. This is particularly true for rapid, large-scale technology development that carries both the potential for positive growth and impact and the potential for negative consequences if misused, as AI does. In 2024, Washington D.C. and at least 45 states, including California, introduced AI bills, and thirty-one states adopted resolutions or enacted legislation. The federal government introduced comprehensive AI legislation of its own. With this landscape in mind, a representative sample of the top legislative AI developments of 2024 includes (1) two federal bills, (2) several bills both enacted and rejected in California, (3) a Colorado bill similar to one that did not pass in California, and (4) a swathe of enacted bills creating state AI task forces.

Federal Level

In 2024, two bills related to the development and integration of AI were introduced at the federal level: one in the House of Representatives and one in the United States Senate.

H.R. 6936, Federal Artificial Intelligence Risk Management Act of 2024. H.R. 6936, introduced in January, would require the National Institute of Standards and Technology (NIST) to develop guidance for federal agencies' AI risk management efforts. The NIST guidelines must include, among other things, standards to reduce the risk of developing or using AI in federal agencies, cybersecurity tools and strategies for AI use, and standards that AI suppliers must meet in order to provide AI to federal agencies. H.R. 6936 would also require the Administrator of Federal Procurement Policy to provide draft contract language requiring AI suppliers to conform to specific conduct and to provide access to data, models, and parameters for sufficient testing, evaluation, verification, and validation.

Despite being introduced early in the calendar year, H.R. 6936 has not made any significant progress. After its introduction, the bill was referred to the Committee on Oversight and Accountability and the Committee on Science, Space, and Technology. Since then, neither the Committees nor the full House has made any changes or comments to the bill.

S. 4178, introduced in April, would create the AI Safety Institute within NIST. S. 4178 aims to establish AI metrics, tools, and standards and to support the research and development of AI. The AI Safety Institute would be responsible for a number of tasks, such as researching and evaluating AI model safety and developing voluntary standards to detect synthetic content, prevent violations of privacy rights, and ensure transparency of datasets. S. 4178 also establishes an AI testbed program through which the public and private sectors can collaborate on evaluating AI system capabilities, developing tests, and conducting general research, development, and testing. Finally, S. 4178 would require the Secretary of Commerce, the Secretary of State, and the Director of the Office of Science and Technology Policy to cooperate on forming an alliance to develop international AI standards.

Most recently, in July, the Committee on Commerce, Science, and Transportation passed the bill through markup and will move forward with S. 4178. Given the state of the bill, it can be expected that more will be heard about S. 4178 in 2025.

California

California was one of the most active states in introducing and enacting AI legislation. In September, California announced that Governor Gavin Newsom had signed 18 AI bills into law. However, Newsom vetoed one bill, and the California State Senate did not pass another; both garnered significant attention when they were introduced.

AB-2885: Artificial Intelligence. Signed by Newsom, this bill amends California law to provide a uniform definition for “Artificial Intelligence.” Now, California law defines Artificial Intelligence as: “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” In plain language, California now defines AI as a system that has some sense of autonomy and can generate outputs based on inferences derived from input data.

AB-2013: Artificial Intelligence Training Data Transparency. This bill, signed by Newsom and effective January 1, 2026, requires developers of GenAI services or systems to make certain disclosures about their training data. They must post a high-level overview of the datasets used to develop the GenAI on their website, including information such as (1) the sources or owners, (2) the number and types of data points, (3) a description and classification of the data, (4) whether the datasets contain protected intellectual property, (5) whether they were purchased or licensed, (6) whether the datasets contain personal information, and (7) whether the datasets have been cleaned, processed, or modified, and the purpose for doing so. Developers of GenAI systems in California will thus be subject to significant disclosure requirements. However, AB-2013 does not apply to GenAI systems or services whose sole purpose is to help ensure security and integrity, to support the operation of aircraft in the U.S., or to serve national security, military, or defense purposes.

SB-942: California AI Transparency Act. SB-942, signed by Newsom and coming into effect on January 1, 2026, enacts provisions requiring developers of GenAI systems with over 1 million monthly users or visitors to take actions that help the public distinguish between AI-generated and non-AI-generated material. First, the developers must provide a free AI detection tool to the public, which allows users to determine whether content was created or modified by the developer's GenAI systems. This free tool must allow users to upload or link content and must provide any system provenance data detected within the content (not including any personal provenance data).

Second, these developers must give users the option of including a clear and conspicuous disclosure in the content that identifies it as AI-generated and that is permanent or difficult to remove. The developers must also include a latent disclosure stating the developer's name, the GenAI version that generated or altered the content, and the time and date. This disclosure must be detectable with the developer's tool, and it must likewise be permanent or hard to remove. Given the bill's consistent use of "image, video, or audio content," SB-942 does not apply to GenAI models that do not output one of these types of content.

SB-1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. SB-1047 was designed to prevent AI systems from causing certain "critical harms," such as those associated with chemical, biological, nuclear, or radiological weapons. The bill covered only AI models that met certain thresholds for computational power and training costs, and Newsom vetoed SB-1047 because of these thresholds. In his veto letter, Newsom stated, "By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology." Although Newsom recognized the need to mitigate the risk of a "major catastrophe" before AI causes one, this legislation did not strike the necessary balance, and California has yet to enact AI legislation aimed at this specific purpose.

AB-2930: Automated Decision Systems. The California State Senate did not pass AB-2930 during this legislative session. The bill was intended to prevent "algorithmic bias" that could occur in AI models by introducing requirements for both developers of AI systems or processes and those who deploy them. Developers would have had to conduct impact assessments and provide them to deployers prior to deployment and annually thereafter. These assessments could include the types of personal characteristics the AI system or process would evaluate. Deployers of AI processes or systems that make "consequential decisions" would also have had to inform affected individuals that the AI system was being used, along with other information about the nature of its use. Although California did not pass this bill, Colorado passed a very similar one.

Colorado

Colorado enacted three AI bills in 2024, one of which is similar to California's AB-2930. Governor Jared Polis signed CO SB205 into law in May. This law, like California's AB-2930, aims to prevent algorithmic discrimination in AI systems, which it defines as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law." The law imposes duties on developers, who must use reasonable care to protect consumers from algorithmic discrimination and provide deployers with documentation about their systems. The list of requirements for deployers is even longer: they must (a) implement and maintain a risk management policy and program, (b) complete impact assessments annually and after major changes to the AI system, and (c) provide a statement to consumers affected by consequential decisions informing them about the use of AI.

State AI Task Forces

Colorado, Illinois, Indiana, Massachusetts (by executive order), Oregon, Washington, and West Virginia each created an AI task force. The language used to create each task force differs by state, but at a high level, these task forces are meant to protect consumers, workers, and the general public against the risks of AI. The creation of these task forces can be interpreted as a sign of more legislation or executive rulemaking to come in 2025.

The federal government did not enact AI laws this year. However, the continued introduction of AI bills, as well as the decision by Committees to pass AI measures through markup, suggests that we will see AI legislation in 2025 and even further into the future. At the same time, although many states did not enact AI legislation, nearly every state introduced AI legislation this year, suggesting that states will continue to evaluate their need for AI legislation in 2025 and beyond.


Editorial Staff

The American Legal Journal provides the latest legal news from across the country to our readership of attorneys and other legal professionals. Our mission is to keep our legal professionals up-to-date and well informed, so they can operate at their highest levels.

