Who’s Liable for Bad Medical Advice in the Age of ChatGPT?
By Matthew Chun
By now, everyone’s heard of ChatGPT — an artificial intelligence (AI) system by OpenAI that has captivated the world with its ability to process and generate humanlike text in various domains. In the field of medicine, ChatGPT has already been reported to ace the U.S. medical licensing exam, diagnose illnesses, and even outshine human doctors on measures of perceived empathy, raising many questions about how AI will reshape health care as we know it.
But what happens when AI gets things wrong? What are the risks of using generative AI systems like ChatGPT in medical practice, and who is ultimately held responsible for patient harm? This blog post will examine the liability risks for health care providers and AI providers alike as ChatGPT and similar AI models are increasingly used for medical applications.
Liability Risks for Health Care Providers
First, let’s consider the risks for health care providers who rely on AI tools like ChatGPT to treat patients. For these individuals, the possibility of medical malpractice claims and Health Insurance Portability and Accountability Act (HIPAA) violations looms large.
Medical Malpractice
To defend against a medical malpractice claim, a clinician must show that the care they provided met or exceeded an acceptable standard — typically assessed against the care that would be provided by a “reasonable, similarly situated professional.” And, as a practical matter, courts often look to clinical practice guidelines and expert testimony to determine exactly what the appropriate standard of care should be.
Unfortunately for health care providers, courts are unlikely to find that other reasonable professionals would rely on the advice of ChatGPT in lieu of their own human judgment. AI models similar to ChatGPT are well known to have issues with accuracy and verifiability, sometimes generating factually incorrect or nonsensical outputs (a phenomenon referred to as hallucination). Therefore, until more reliable AI technologies are produced (and some are indeed on their way), health care providers remain highly likely to be held liable for bad AI-generated medical advice, especially if they, as professionals, should have known better. For this reason, some experts suggest that health care providers use ChatGPT only for more limited use cases such as medical brainstorming, drafting content to fill in forms, reviewing medical scenarios, summarizing medical narratives, and converting medical jargon into plain language.
HIPAA Violations
Entirely separate from concerns about malpractice, health care providers also need to be aware of the privacy implications of using ChatGPT. Existing versions of ChatGPT are not HIPAA compliant, and there is a risk of a patient’s protected health information being accessible to OpenAI employees or used to further train ChatGPT (although providers can opt out of the latter). The use of ChatGPT in clinical settings therefore carries additional liability risks for health care providers.
Liability Risks for AI Providers
Moving on from health care providers, let us now consider the potential liability for AI providers themselves. Namely, could OpenAI, as the developer of ChatGPT, be held liable for any bad medical advice that their AI system gives to users?
Regulatory Misconduct
One potential approach to holding AI providers liable for their products is to assert that ChatGPT and similar models are unapproved medical devices that should be regulated by the U.S. Food and Drug Administration (FDA). However, under current law, such an approach would probably be met with little success.
Per Section 201(h) of the Federal Food, Drug, and Cosmetic Act, a medical device is essentially “any instrument, machine, contrivance, implant, in vitro reagent that’s intended to treat, cure, prevent, mitigate, diagnose disease in man” (emphasis added). In other words, intent matters. And in the case of ChatGPT, there is no evidence that it was designed to be a medical device. In fact, when asked directly whether it is a medical device, ChatGPT vehemently denies it:
“No, I am not a medical device. I am an AI language model created by OpenAI called ChatGPT. While I can provide information and answer questions about a wide range of topics, including health and medicine, I am not designed or certified to diagnose, treat, or provide medical advice. My responses are based on the information available up until September 2021 and should not be considered as a substitute for consulting with a qualified healthcare professional. If you have any medical concerns or questions, it is always best to seek advice from a medical expert.”
Medical Misinformation
With regulatory misconduct off the table, another possibility for holding AI providers liable for their products’ bad advice is to bring a claim for the dissemination of medical misinformation. While far from a surefire victory, there is a stronger legal argument to be made here.
As Claudia Haupt of Northeastern University School of Law notes, there is generally broad free speech protection under the First Amendment, shielding individuals who “provide erroneous medical advice outside professional relationships.” However, Haupt suggests that the Federal Trade Commission (FTC) could cast bad AI-generated medical advice as an unfair or deceptive business practice in violation of the FTC Act. She also suggests that the FDA could hold software developers responsible if ChatGPT makes false medical claims (although, as noted above, it appears that OpenAI has made clear efforts to avoid this possibility).
While content published by online intermediaries like Google and Twitter is protected from legal claims by Section 230 of the 1996 Communications Decency Act (commonly referred to as “platform immunity”), many believe that ChatGPT would not enjoy the same protection. Unlike Google and Twitter, OpenAI does not disclose the sources of the third-party information used to train ChatGPT, and OpenAI acts as much more than “a passive transmitter of information provided by others,” since ChatGPT crafts unique, individualized responses to user prompts. Thus, patients who are harmed by medical misinformation from ChatGPT may have a valid legal claim against OpenAI under consumer protection law, although this theory has yet to be tested.
Conclusion
As impressive as ChatGPT is, clinicians and consumers alike should be wary of the potential for harm from relying too heavily on its recommendations. AI is not a substitute for good human judgment, and, for now, there are few options for defending against malpractice claims or holding AI providers accountable for bad AI-generated medical advice.
Illustration created with the assistance of DALL·E 2.