
The NTIA Report & California AI Bill: New Era of Open Model Governance

The California legislature recently passed SB 1047, which awaits the governor’s signature or veto and could have a significant impact on open models.

Regulators are particularly interested in models trained using exceptional computing power: the EU focuses on models trained with 10^25 floating-point operations, a threshold that covers a few current frontier models, while California and the U.S. federal government have keyed in on future models trained using 10^26 operations.

  • Companies should pay attention to best practices outlined by the National Institute of Standards and Technology (NIST), as the California legislation incorporates NIST’s guidance by reference.
  • Open models play a crucial role in fostering a diverse and innovative AI ecosystem. The government is concerned that open models’ irrevocable accessibility can make it difficult to prevent downstream misuse. President Biden’s 2023 Executive Order on AI (Biden EO) defines powerful open models that have tens of billions of parameters as “dual-use foundation models with widely available weights.” To evaluate the risks of those open models, the Biden EO mandated that the Department of Commerce solicit comments from the public and submit a report with regulatory recommendations.
  • This potential restriction on open models triggered significant reactions from the AI community, resulting in over 330 public comments submitted to the NTIA. OpenAI, Google, Microsoft, Meta, Anthropic, IBM, Cohere, Stability AI and EleutherAI submitted extensive opinions about open models’ benefits, risks and regulation. On July 30, 2024, the NTIA released a report recommending further evidence collection before implementing any regulations on open models.
  • Meanwhile, the California legislature separately passed an AI safety bill, SB 1047 (CA AI Bill), on August 29, 2024, which Gov. Gavin Newsom will either sign or veto. The legislation is not specifically directed at open models, but the bill could significantly impact the public dissemination of model weights in an open ecosystem, as it focuses on model developers’ liability instead of downstream applications.

The Risk and Benefit Considerations

Because downstream control is limited, some consider models with widely available weights to present greater safety risks than more powerful closed foundation models. It is almost impossible to retract open models and their weights once they are released, and guardrails built into models can be removed or circumvented after release. These exploits potentially enable harmful uses by malicious parties outside the original model developers’ control.

Notwithstanding such concerns, the public comments to the NTIA overwhelmingly support an open model ecosystem, arguing that evidence and data do not substantiate these concerns. In its report, the NTIA weighed the marginal risk added by open models’ accessibility and ease of distribution against the open model ecosystem’s significant benefits.

The NTIA’s Balanced Approach: Evidence Before Action

By considering the risks and benefits associated with open models, the NTIA recommended a monitoring-based approach that forgoes immediate regulation while preserving the future option of restricting access to powerful open models. The report outlines a three-step process: collect evidence, evaluate it, and act on the findings. The evidence may lead to regulatory actions such as restrictions on access. The NTIA report stresses the need for flexibility as AI technology evolves, aiming to balance fostering innovation and safeguarding against the marginal risks that open models pose.

The NTIA-recommended approach of collecting and evaluating evidence resonates with a reporting requirement set forth in the Biden EO, which mandates that companies training models using a quantity of computing power greater than 10^26 floating-point operations (FLOPs) provide reports to the federal government.

California’s Sweeping Approach

Unlike the federal reporting requirements, the CA AI Bill seeks to impose additional regulations on certain models that meet a regulatory threshold. The CA AI Bill defines “covered models” as AI models that have been trained with more than 10^26 FLOPs. See § 22603(a).

The CA AI Bill’s legislative approach appears to contrast with the federal government’s approach, which recognizes the benefits of open models and prefers collecting evidence before taking regulatory actions.

Understanding the FLOPs Regulatory Threshold

The highly technical number 10^26 FLOPs is best understood by comparing the U.S. regulatory threshold to the European Union’s. Both the Biden EO and the CA AI Bill in the U.S. use the 10^26 FLOPs threshold. Comparatively, the EU AI Act identifies models trained with 10^25 FLOPs or more as posing a systemic risk. While 10^25 and 10^26 may seem similar, the difference of one order of magnitude reflects an important divergence in regulatory ideology. The current generation of AI models, as of when the EU AI Act came into effect in August 2024, are trained at around 10^25 FLOPs. As of April 2024, no AI model was publicly known to have exceeded 5×10^25 FLOPs.

Source: Epoch AI (data as of April 5, 2024)
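To make the order-of-magnitude gap concrete, the short sketch below (an illustration for this article, not anything prescribed by either regulation) compares the two thresholds against the roughly 5×10^25 FLOPs that the largest publicly known training runs had reached as of April 2024.

```python
# Illustrative comparison of the compute thresholds discussed above.
EU_THRESHOLD = 1e25        # EU AI Act: systemic-risk presumption (FLOPs)
US_THRESHOLD = 1e26        # Biden EO / CA AI Bill threshold (FLOPs)
LARGEST_KNOWN_RUN = 5e25   # Approximate largest publicly known run, April 2024 (Epoch AI)

print(US_THRESHOLD / EU_THRESHOLD)        # 10.0 -> thresholds are one order of magnitude apart
print(LARGEST_KNOWN_RUN / EU_THRESHOLD)   # 5.0  -> current frontier already exceeds the EU threshold
print(US_THRESHOLD / LARGEST_KNOWN_RUN)   # 2.0  -> U.S. threshold targets runs about twice as large as any known today
```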

Putting the above observations into perspective, the EU AI Act is focused on immediately regulating current and future frontier models, as current models have already exceeded the EU’s threshold of 10^25 FLOPs. By contrast, no publicly known model has yet reached the U.S. threshold. According to the NTIA, the U.S. government is focused on gathering evidence about the future risks of models should companies start training models several times larger than the current frontier models.

The CA AI Bill and the Open Model Ecosystem

The CA AI Bill requires developers to implement “cybersecurity measures to prevent… unsafe modifications of” covered models, along with the capability to promptly enact a “full shutdown.” See § 22603(a). These obligations could have a significant impact on the open model ecosystem, where a model’s weights are widely available to the public.

Before discussing how those requirements may impact different types of models in varying degrees, it is noteworthy that the degree of openness of AI models exists along a spectrum, as shown in the diagram below.

Source: Connected Health Initiative’s comment to the NTIA

Future frontier models “with widely available weights” are most likely to be affected by the CA AI Bill. Closed models hosted on a model developer’s server are more manageable in terms of preventing “unsafe post-training modification” and implementing a “full shutdown.” Open models with widely available weights can be copied, distributed, hosted locally by third parties and, most likely, freely adjusted. While § 22602(k) of the CA AI Bill imposes the “full shutdown” requirement only on models and derivatives “controlled” by a developer, the boundary between “control” and “release” of models to the public may remain unclear. It can also be difficult to define what constitutes a reasonable and sufficient level of protection against unsafe post-training modifications in an open model that is intended to be modified by the general public. As such, uncertainty exists in bringing open models into compliance with the CA AI Bill, which may hinder the future release of powerful open models.

Towards Standardized Risk Measurement: The Role of NIST

The CA AI Bill highlights the uncertainty and difficulty in measuring model risks and defining safeguards, particularly for open models. These issues, which are both legal and technical in nature, may push the AI industry toward standardized risk measurements. The NTIA report, as well as many public comments made to the NTIA, call for a more scientific approach to AI risks. NIST is likely to play a key role in this regard, especially through its newly formed division, the U.S. Artificial Intelligence Safety Institute (AISI).

Adhering to the guidance and standards from NIST and the AISI will help AI companies mitigate risks associated with developing or deploying AI. NIST has recently released the Artificial Intelligence Risk Management Framework as well as the Secure Software Development Practices for Generative AI. These frameworks may become de facto regulations for AI developers, even though they are not legally enforced. The CA AI Bill, for example, mandates that model creators and computing cluster operators “shall consider industry-best practices and applicable guidance” from the AISI and NIST. See §§ 22603(i) and 22604(b). The bill also requires the California Government Operations Agency to establish regulations for AI model auditing that “shall at a minimum be consistent with” the AISI’s and NIST’s guidance. See proposed change to CA Government Code § 11547.6(e).

The Debate Between Horizontal and Vertical Approaches: Regulating Models or Applications

The NTIA report and the CA AI Bill highlight contrasting approaches to AI regulation. The NTIA report suggests that the federal government might prefer a vertical regulatory approach that focuses less on the models themselves and more on downstream applications, i.e., “interventions on downstream pathways where those risks materialized.” For example, the NTIA report contains detailed analyses of high-risk areas related to chemical, biological, radiological and nuclear (CBRN) threats, cybersecurity, and misinformation/disinformation.

Discussions about open model safety have focused on model developers’ pre-release obligations, such as thorough and scientific red-teaming efforts to stress test a model’s safety. If the scope of regulation shifts downstream, federal regulators could start to pay attention to deployers who integrate AI tools into end-user products for certain sectors. The CA AI Bill, by contrast, places greater emphasis on model developers’ responsibilities. The bill prohibits any agreement that transfers liability from the developer to a third party in exchange for the use of the developer’s AI product. See § 22606(c)(1). Leaders in the AI industry have criticized the approach of regulating models, which can be considered a general-purpose technology. It remains to be seen whether federal regulation may at some point include express preemption of divergent approaches such as the CA AI Bill’s.

Conclusion

The evolving landscape of AI regulation, as reflected in the NTIA’s cautious approach and California’s more aggressive stance, underscores the ongoing debate over how best to balance innovation with safety.
