
Big Government Republican Bill Stops States from Reining In Bad AI: 10-year license for unfettered harm

In a world where artificial intelligence (AI) is used to make decisions about people’s daily lives, people are concerned about how AI could affect their jobs, the use of their online posts, their privacy, and their access to quality medical care. AI experts and the general public have warned that the government should take action to ensure accountability and protect people in this rapidly evolving field. The recent Congressional proposal would instead bar state and local governments for a decade from enforcing laws or regulations governing AI and similar systems. The proposal would also require the federal government to modernize its systems using commercial AI products; the proposed reconciliation bill includes up to $500,000,000 for such purchases through 2035. While the federal government funnels money to commercial interests to purchase AI systems that will not be required to meet meaningful safety or accountability standards, state and local governments will not be able to protect their residents against harm caused by these systems or by any other AI system affecting people. The 10-year safe harbor granted by Congress makes it unlikely that federal restrictions will provide the level of protection required, and any that do emerge would take a long time to arrive.

States Lead on AI Regulation & Protection

States have taken the lead on efforts to regulate AI and protect people from harmful systems. State legislation varies, but there is growing interest in requiring greater transparency and assurances when AI is used. Last year, Colorado enacted landmark legislation that establishes consumer protections against discrimination by AI systems. California’s legislature has introduced 23 AI-related bills so far in 2025. Some states have begun to regulate AI use, including requirements for disclosure, while others have incorporated AI protections into consumer protection laws.

States also help enforce new protections. For example, Texas secured a “first-of-its-kind settlement” against an AI healthcare technology company for deceptive claims about the accuracy of its healthcare AI products. The New York Attorney General brought legal action against health insurance companies for using an algorithm that allegedly led to denials of mental health care. State attorneys general have also released AI guidance, urged the federal government to take action, and taken steps to prevent discrimination by such systems, including in health care. Governors have taken steps to protect the public from harm by issuing executive orders and other measures.

The bill would slam the brakes on these kinds of state efforts to protect people from harmful AI, including simple measures that require basic assurances, disclosure, and accountability.

Harmful Impact of AI on Low-Income People, Including Medicaid Coverage and Services

In America, 92 million low-income individuals have their lives impacted by AI-driven decisions. This includes 72 million low-income people who are exposed to AI decision-making within Medicaid. These tools are used frequently in Medicaid eligibility and enrollment processes, needs assessments, and prior authorization for medically necessary services. The unprecedented speed and scale at which these tools operate and spread test the limits of existing accountability frameworks. These systems are already causing real harm. AI, including algorithms, is already being used in public benefit programs, with some devastating results. These tools often rely on faulty and unreliable information that can lead to an inappropriate loss of benefits, and they add black-box eligibility criteria that are not allowed by law. This is especially true for people with disabilities.

Regulation of Risk is a Normal and Needed Function of Government

The call for accountability and regulation of AI is a normal function of government. Regulations ensure that our cars are equipped with seat belts to protect us in accidents and that our food is safe. They also ensure that our prescriptions have been tested and their risks identified. The federal government and industry are proposing to let technology run wild in the name of “innovation” without regard for those it may harm. AI can cause irreparable harm, especially in the health sector, where denying care can result in permanent injury or death. It is unacceptable to wait for AI systems to cause harm, then try to fix them after the fact and move on. In general, society and government do not allow unfettered experiments on people, especially people who have not consented to the experiment. It is especially immoral when the majority of those affected are low-income individuals with limited resources and time to fight wrongful decisions and few other options to get needed care. One developer of a Medicaid algorithm said, “you will have to trust me because a bunch of smart people determined that this is the right way to do it.” The algorithm in question contained errors and caused significant harm to Medicaid long-term care enrollees. We can examine his washing-machine analogy in a different way to shed some light on the current proposal. In reality, washing machines are regulated for safety, performance, and energy efficiency. AI, algorithms, and other forms of automated decision-making should be more like washers in that they are tested and regulated to ensure safety and consumer protection. States must be able to create and enforce regulations and protect people when needed.

It is irresponsible to create a license for AI developers to run wild with technology that has already been proven harmful. It is also a recipe for disaster to force the government to update its technology in a manner that favors profits over privacy and protection. This new provision creates an environment that is perfect for the proliferation of harmful, wasteful systems. It also tramples on the autonomy of states by preventing them from taking any action. Less state enforcement would shield the tech industry and encourage it to do less, maximizing profits at the expense of government funding and the people these systems are supposed to serve.

For further information on NHeLP’s work on accountability for algorithmic and automated decision-making systems, please visit our page, Fairness In Automated Decision-Making Systems.


