The Ethics and Practicality of AI Assisted Patent Drafting
Given current and ongoing economic realities, patent practitioners—both in-house and outside counsel—are constantly being asked to do more within existing budgets. Meanwhile, more robust patent applications thick with technical detail are necessary to satisfy courts and patent offices around the world. Working within budgetary constraints without sacrificing quality requires outside-the-box thinking and the use of available tools to streamline as much of the process as possible.
Enter Artificial Intelligence (AI), which is taking the world by storm and recently garnered the attention of the American Bar Association, which has announced the creation of a task force to examine the impact of AI on law practice and the ethical implications of its use for lawyers.
Thanks to ChatGPT, even those with only passing technological prowess are now familiar with the power of AI, and everyone has an opinion as to what AI can and cannot do, and should and should not do. But how do you use the power of AI in the innovation and patent realm without compromising confidentiality? And how do you know whether a particular AI solution is reliable and accurate? And for practitioners, how can you ensure that use of AI technology in patent practice is ethical?
Proceed with Caution
The question about the appropriateness of the use of AI in the patent and innovation communities is not linear or absolute. There are times when the use of AI will be perfectly fine, even wise and necessary, such as with respect to conducting and reviewing searches. Provided that third-party vendors appropriately “fence in” their proprietary AI tools so data and information input to search is siloed and client technical data is not being used to cross-pollinate other searches—a real confidentiality concern—the use of AI searching tools seems perfectly appropriate. In fact, using AI may soon become necessary in order to uncover all the prior art that will be found by patent examiners, who themselves are increasingly using and can be expected to use AI tools.
Nevertheless, the use of AI by practitioners engaged in the practice of law must be carefully considered from both a legal and an ethical standpoint. This is particularly the case given the lack of understanding and sophistication of at least some providers who are pushing AI solutions without truly understanding the nature of patent practice. For example, one of the authors of this article was recently approached by a vendor who made various fanciful claims about how AI could revolutionize their work, including the following:
- Drafting patent applications, which normally is a 120-minute task, can be accomplished in just 5 minutes.
- Preparing responses to office actions, which is normally a 90-minute task for a person, can be done in just 3 minutes.
- Responding to client inquiries, which is normally a 15-minute task, can be accomplished in less than a minute.
Anyone with even passing familiarity with patent practice would know that writing a patent application, even for something exceptionally simple, is more than a two-hour task. And responding to an office action takes more than 90 minutes given all the legal and ethical requirements involved: understanding the examiner’s position, appreciating the meaning of the prior art cited, and coordinating arguments across the patent family. Indeed, if drafting a patent application took two hours, and responding to office actions took only 90 minutes, there would be no need to streamline anything.
The ethical concerns presented when using artificial intelligence are not insurmountable but do require practitioners to engage in serious forethought and consultation with clients prior to using these tools. And considering some of the unrealistic and fanciful claims some vendors make about their tools, and their limited understanding of the complexities of patent practice, it is absolutely essential that AI tools and solutions be thoroughly vetted. This includes reading the fine print of any service agreement and even separately contracting for the specific needs of your client rather than accepting a standard service agreement that is offered.
Using AI Responsibly Today
Understanding how any AI solution collects, stores, and uses the information provided is a prerequisite to considering issues like confidentiality. And confidentiality is a big issue, not simply because of the potential—perhaps likely—loss of trade secrets, but because the rules of professional conduct require a duty of confidentiality and further include a requirement that practitioners adequately protect client information against even inadvertent disclosure (see ABA Model Rule 1.6 and USPTO Rule 11.106). And practically speaking, clients absolutely will want to know how confidential information relating to their innovations is being used and stored, and whether it will be leveraged by the AI tool for training purposes.
There is no doubt that AI drafting tools can already provide significant advantages in patent application preparation and prosecution, including rapid generation of content-rich output written in natural language text similar to what a human might write. While it will be some time, perhaps quite some time, before AI can replace human drafters, leveraging AI as a tool is possible today. For example, one particularly useful application of AI is generating portions of an application based on claim language, such as the abstract, summary, and a flowchart; completing forms to be filed with the patent office; and creating office action response shells. There are, however, patent, export, and other legal considerations in using such a tool, including but not limited to public disclosure and confidentiality. This is because the data on which the AI model is trained, and which may be included in its output, often comes from unknown and unknowable sources both within the United States and across the world. And depending on the terms and conditions applicable to the specific AI tool in question, it is entirely possible, if not likely, that the confidential information provided will be used to train the AI, which may result in the AI reproducing that information when prompted by others, including competitors.
Of course, there are other important considerations that go beyond loss of previously secret, confidential information. What is the security of the AI tool? What is the location of the server running the tool and will this potentially result in an export violation? And who is capable of claiming the subject matter generated by the tool, or in other words, who is the owner of the output, who is the inventor, and might you be incorporating something into a patent application that is actually owned by a competitor, thereby giving them a claim of ownership over the entire patent family?
Outside counsel should therefore have advance discussions with their clients about whether to use the tool, or how best to mitigate the risks associated with the use of AI in patent practice, in accordance with applicable rules of professional conduct. For example, ABA Model Rule 1.4(b) and USPTO Rule 11.104(a)(2) require a lawyer to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished,” and the associated ABA Rule comment provides that “[i]n some situations — depending on both the importance of the action under consideration and the feasibility of consulting with the client — this duty will require consultation prior to taking action.”
Informed Consent
Truly informed consent prior to using any AI-assisted drafting tool, as required by the rules—see ABA Model Rule 1.4 and USPTO Rule 11.104—should be based on a full, fair, and accurate disclosure of the risks of using the AI tool in the manner planned by outside counsel. Risks will depend on the situation, but there are several factors to consider. One big area of concern, already alluded to, is whether the AI tool’s terms of use include appropriate confidentiality and security provisions and prohibit use of client data for training the AI model. This is critical because if you use a particular AI solution to craft a portion of a description for a patent application, you need to understand whether that AI tool will internalize the information provided and continue to include it in the corpus it learns from and draws on when engaging with future users. And this is not a fabricated concern. For example, with ChatGPT, there is a big difference between the terms of service applicable when one uses the online tool versus the tool via API. When using ChatGPT online, the terms and conditions specifically allow OpenAI to incorporate any input into the training set used to teach the AI tool going forward, while the same is not necessarily true with respect to use of ChatGPT via API.
Other concerns that must be addressed if a client is going to truly provide informed consent include, in no particular order:
- Whether the server is located solely within the United States, and whether data is drawn from and communicated to it only domestically;
- Whether the AI tool has proper security certifications, such as SOC 2; and
- What portions of the patent application the AI tool will generate, and whether the generated output might include third-party copyrighted material, competitor material, “hallucinations”, or material that did not originate with the inventors but could be claimed.
Duty to Supervise
Even if the client gives informed consent, the patent attorney or agent still has to carefully manage the use of AI to ensure that they are providing competent representation to the client. See ABA Model Rule 1.1 and USPTO Rule 11.101. At present, AI tools cannot be expected to understand the nuances of recent case law, changes in patent rules, recent competitor publications, or even changes in international patent prosecution. The patent practitioner utilizing the AI tool must constantly check the accuracy of the output and further augment and edit it to ensure it meets the requirements of the ever-evolving IP landscape. Failure to check AI output for accuracy recently resulted in fines against attorneys who submitted a court brief with fake citations to the law. See Mata v. Avianca, Inc., No. 22-cv-1461 (PKC), 2023 U.S. Dist. LEXIS 108263 (S.D.N.Y. June 22, 2023).
Managing practitioners are required to supervise those who work for them, as well as non-attorneys and non-practitioners employed or engaged to facilitate the representation of the client. See ABA Rules 5.1 et seq. and USPTO Rule 11.501 et seq. It seems virtually certain that this supervisory obligation extends to the output of AI tools, as well as to third-party vendors who offer patent drafting services using their own proprietary AI solutions. So, clients will want to know, and the ethical rules almost certainly require, that a patent attorney or agent proofread and edit the generated content to mitigate or eliminate the aforementioned risks. Moreover, prior to using AI drafting tools, practitioners should also answer: What will the responsible patent attorney or agent do to ensure the accuracy and reliability of the output of any AI tool prior to its incorporation into a patent application or use in an office action response?