The memos implement Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” issued by President Trump on January 23, 2025.
This blog is the first in a two-part series discussing the first memo, M-25-21, which focuses on the use of AI by federal agencies. The memo signals a significant shift in how federal agencies acquire and integrate AI, and in what government contractors (particularly those working in IT, data science, and emerging technologies) need to do now to prepare.[1]

Potential Impacts
- In terms of intellectual property rights, the memos acknowledge that protections are owed, but do not detail how those protections will be implemented. Contractors will need to adapt their licensing agreements accordingly.
- Additional solicitation requirements that may require contractors to disclose their use of AI.
- Subject to certain exceptions, the memos call for a public repository for AI information and data, potentially releasing contractor data that is not adequately protected.
- Largely undefined Buy American preferences.
- The memos also create a category of AI that will require additional risk management for “high-impact” applications, as explained below. Contractors will need to be cognizant of whether these higher standards apply to their contract.
- The memos call for agencies to revisit and update their IT policies and IT infrastructure within 270 days, which may impact current contracts.
- Considering the short time frames to implement several required actions, the memos signal that the use and procurement of AI by federal agencies is a priority of the Trump administration, and government contractors should take notice.

To implement these goals and priorities, M-25-21 directs agencies to:
- remove barriers to innovation and provide the best value for the taxpayer;
- empower AI leaders to accelerate AI adoption; and
- ensure that agency use of AI works for the American people.
- The aim of M-25-21 is to provide “guidance to agencies on how to innovate and promote the responsible adoption, use, and continued development of AI, while ensuring appropriate safeguards are in place to protect privacy, civil rights, and civil liberties, and to mitigate any unlawful discrimination, consistent with the AI in Government Act.” Additionally, M-25-21 only applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies.” Importantly, M-25-21 is not applicable to AI used as a component of a National Security System, and includes certain exceptions for elements of the Intelligence Community.
- Best Value to the Taxpayer
- In line with the current administration’s initiatives to cut down on government spending, M-25-21 directs federal agencies to:
- share resources within an agency and across the government;
- “reuse resources that enable AI adoption, such as agency data, models, code, and assessments of AI performance”;
- “proactively share across the Federal Government their custom-developed code, whether agency developed or procured, for AI applications in active use,” with certain exceptions described below;
- “prioritize sharing AI code, models, and data government-wide, consistent with the Open, Public Electronic and Necessary (OPEN) Government Data Act”; and
- create a public repository to store and maintain AI code as open source for access by any federal agency, subject to certain restrictions, e.g., where the agency is prevented from doing so by the contract.

High-Impact AI

High-impact AI is defined as AI with an “output [that] serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety,” and a “high-impact determination is possible whether there is or is not human oversight for the decision or action.” M-25-21 requires that agencies implement minimum risk management practices for high-impact AI, including, among other things, (i) pre-deployment testing, (ii) AI impact assessments, (iii) ongoing monitoring, and (iv) training.
- Additionally, there is an automatic presumption that certain categories of AI uses are deemed high-impact, including, among other things:
- safety-critical functions of critical infrastructure or government facilities, emergency services, fire and life safety systems within structures, food safety mechanisms, or traffic control systems and other systems controlling physical transit;
- physical movements of robots, robotic appendages, vehicles or craft (whether land, sea, air, or underground), or industrial equipment that have the potential to cause significant injury to humans;
- use of kinetic or non-kinetic measures for attack or active defense in real-world circumstances that could cause significant injury to humans;
- transport, safety, design, development, or use of hazardous chemicals or biological agents;
- design, construction, or testing of equipment, systems, or public infrastructure that would pose a significant risk to safety if they failed;
- control of access to, or the security of, government facilities; and
- use of biometric identification for one-to-many identification in publicly accessible spaces.
If AI use falls into one of these categories and is presumed to be high-impact, appropriate agency officials must submit written documentation to the agency’s designated Chief AI Officer (CAIO) to rebut the presumption and show that a particular AI use case does not actually meet the definition of high-impact. OMB may request that CAIOs provide such determinations.

Competition
M-25-21 also generally references a goal of economic competitiveness. In addition, it calls for contractual terms preventing vendor lock-in, which furthers competition but also means contractors should not expect exclusivity agreements.
- Required Actions
- The Appendices, located on the last page of each memo linked above, contain tables of all of the required government actions necessitated by the memos. Some highlights from M-25-21 include:
- Retain or designate a Chief AI Officer within 60 days.
- Publicly release a compliance plan within 180 days and then again every two years until 2036.
- Develop a Generative AI policy within 270 days.
- Publicly release an AI use case inventory every year.
- Revisit and update internal policies on IT infrastructure, data, and cybersecurity within 270 days, which could impact current contracts.
____________________
The memos rescind and replace prior OMB memos issued by President Biden’s administration addressing the use and acquisition of AI by the federal government, M-24-10: Advancing the Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, and M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government.

