Intellectual Property (IP)

FTC Launches Investigation into OpenAI | Rothwell, Figg, Ernst & Manbeck, P.C.

Following its many warnings of impending enforcement action against entities providing Artificial Intelligence (“AI”) products, the FTC has officially launched an investigation into OpenAI[1]. The investigation comes on the heels of the Center for AI and Digital Policy’s July 10, 2023 supplement to its March 30, 2023 complaint, which requests that the FTC investigate OpenAI. It also follows multiple civil class action lawsuits filed against OpenAI in the past month alleging numerous privacy and intellectual property violations.

According to the Civil Investigative Demand (“CID”) sent to OpenAI, the FTC is focused on whether OpenAI has (1) engaged in unfair or deceptive privacy and data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers[2]. Pursuant to these concerns, the 20-page demand sets forth numerous interrogatories and document requests directed toward almost every aspect of OpenAI’s business.

Specifically, among other inquiries, the CID requests information about how OpenAI handles personal information at various points in the development and deployment of its Large Language Models (“LLMs”) and LLM Products (e.g., ChatGPT). For example, the FTC asks whether OpenAI removes, filters, or anonymizes personal information appearing in training data. Similarly, it requests that OpenAI explain how it mitigates the risk of its LLM Products generating outputs containing personal information.

Generally speaking, the CID asks about OpenAI’s policies and procedures for identifying, mitigating, and disclosing risks. Not surprisingly, the FTC’s investigation includes questions about OpenAI’s response to the publicly disclosed March 20, 2023 data breach and additional inquiries concerning OpenAI’s awareness of any other data security vulnerabilities or “prompt injection” attacks. Relatedly, the CID probes OpenAI’s collection and retention of personal information, reflecting the “data minimization” principles that the FTC has emphasized in numerous prior enforcement actions.

Also, the interrogatories inquire into OpenAI’s policing of third-party use of its Application Programming Interface (“API”). Specifically, the FTC requests information concerning how OpenAI restricts third-party use of the API and any technical or organizational controls that third parties with API access are required to implement. These inquiries suggest that the FTC seeks to hold OpenAI accountable not only for its own use of its LLMs but also for third-party use of them. Interestingly, the CID comes less than a week after OpenAI announced that it would make the GPT-4 API generally available to all paying customers[3].

While the FTC’s investigation is a first for generative AI products, its demands reflect themes consistent with prior enforcement actions: transparency, data minimization, risk identification, and risk mitigation. With many generative AI products at risk until the FTC’s investigation is resolved, the CID sends a message that AI companies should focus on implementing privacy by design and responsible AI practices centered on transparency and security. Additionally, although collaboration figures prominently in the development of generative AI technologies, businesses should be mindful of whom they grant access to their LLMs and to what extent.

[1]

[2]

[3]
