AI in Health Care: Regulatory Landscape & Risk Mitigation
Health care, like most industries, is grappling with the proliferation of artificial intelligence (AI) and the novel risks and benefits it presents. Balancing these risks and benefits can be especially challenging for attorneys, because the laws and regulatory regimes that apply to a given AI technology can vary by technology and across states and countries. Fortunately, experts from health care systems, academia, and government have collaborated through the Coalition for Health AI (CHAI) to create a Blueprint that identifies, and proposes solutions to, the issues that must be addressed to enable the trustworthy use of AI in health care.
Allowing AI technologies to perform tasks traditionally handled by humans presents unique risks in the health care context. The typical AI risks seen in most business contexts, such as cybersecurity threats and invisible bias in decision making, remain present. However, the risk profile in health care is substantially greater because an automated decision can directly harm a patient's health and well-being, causing stress, confusion, pain, suffering, or even loss of life.
Organizations operating in the health care space should conduct a holistic review of each AI technology they wish to implement to determine its overall risk profile. That review should examine areas such as ethical implications, potential for reputational harm, the degree of human oversight, potential biases, and regulatory concerns.
Reading the Regulatory Landscape
There is no uniform law governing AI technologies; instead, certain aspects of these technologies are governed by a patchwork of laws and regulations. The resulting regulatory framework is complex and may vary based on several factors, including the types of data implicated, geographic location, use case, and technical implementation. Various efforts are underway to address this regulatory gap, including the European Union's Artificial Intelligence Act (which the European Parliament recently approved) and the Blueprint for an AI Bill of Rights published by the Biden Administration in October 2022. While these efforts show progress toward formal AI regulation, the patchwork remains.
In the absence of formal regulation, regulators and lawyers frequently look to common industry frameworks when evaluating risk, such as those published by the U.S. National Institute of Standards and Technology (NIST). Fortunately for organizations operating in the health care space, there is ample guidance and thought leadership on safely and responsibly utilizing AI technology. In April 2023, CHAI released the first version of its Blueprint for Trustworthy AI Implementation Guidance and Assurance for Health Care. This Blueprint aligns with NIST’s AI Risk Management Framework and provides a simple and informative starting point for health care organizations seeking to implement AI technologies safely and responsibly. Furthermore, the Blueprint can assist attorneys with vetting AI technologies that their clients wish to implement and in mitigating legal risk by providing a starting point for developing an objective and defensible approach for evaluating their clients’ use of AI technologies.
The regulatory environment surrounding AI technologies is complex and constantly evolving. Organizations operating in the health care space should regularly review and reevaluate their use of AI technologies to ensure they are safe, ethical, and beneficial to the organization, and that they do not violate the law or industry standards. For assistance, consult legal counsel who keep up to date with developments in the law, policy, and technology.
AI in Health Care Series
For additional thinking on how artificial intelligence will change the world of health care, click here to read the other articles in our series.