Key Considerations When Using AI Tools to Draft and File Documents with the USPTO | Sterne, Kessler, Goldstein & Fox P.L.L.C.
Summer Associate Ryan Estatico also contributed to this article.
The use of Artificial Intelligence (AI) tools in practice before the United States Patent and Trademark Office (USPTO) is changing how practitioners prepare and submit documents. The USPTO’s recent Guidance on Use of Artificial Intelligence-Based Tools outlines the implications and responsibilities for practitioners using AI tools in their submissions. This article examines key considerations for practitioners when using AI tools for drafting and filing documents with the USPTO.
First, the guidance emphasizes that AI tools can assist in drafting USPTO submissions, but practitioners must ensure compliance with USPTO rules and policies. AI technologies can improve efficiency by reducing the time and cost of drafting submissions; however, by signing and submitting a document, a practitioner certifies compliance with all USPTO rules and policies. For example, under 37 CFR 11.18(b), practitioners must ensure all statements are true to their knowledge and that they have conducted a reasonable inquiry under the circumstances.
Rule 11.18(b) applies to tasks such as drafting patent claims, generating technical specifications, and evaluating patentability evidence using AI tools. AI tools may also assist practitioners in submitting evidence for patentability or unpatentability, including identifying evidence and drafting affidavits, petitions, or responses to Office Actions. Despite their advantages, AI tools can omit or hallucinate facts, produce hard-to-verify reasoning, and struggle with legal nuances. Practitioners should therefore verify that AI-generated outputs are accurate, technically sound, and compliant with patent law.
Furthermore, the guidance advises careful management of AI tools used for automating processes like populating IDS forms or collecting prior art, to avoid burdening the USPTO with excessive or irrelevant submissions. For example, the guidance counsels that practitioners are obligated to conduct a reasonable inquiry into the contents of an IDS form generated by an AI tool, and failure to review and remove irrelevant information can lead to violations of regulations like 37 CFR 11.18(b).
In short, the guidance counsels that submitting AI-generated work without careful review likely violates 37 CFR 11.18(b), and relying solely on an AI tool’s accuracy is not considered a reasonable inquiry. Practitioners would be wise to carefully review AI-generated work for accuracy, technical soundness, and compliance with patent law, and correct any errors or omissions before signing and submitting documents to the USPTO.
Second, beyond assisting with document preparation, AI tools can automate tasks like completing USPTO forms, accessing USPTO information, and uploading documents to USPTO servers. The guidance emphasizes that users must exercise caution when using AI tools to avoid violating USPTO rules and policies related to authorization.
Only authorized individuals—such as applicants, inventors, registered practitioners, or sponsored support staff—may file documents and access USPTO information. For example, to submit documents via platforms like the Patent Center or Trademark Electronic Application System, users must have a USPTO.gov account, which is restricted to natural persons and cannot be obtained or sponsored by AI systems or non-natural entities.
The guidance also stresses the importance of ensuring AI tools comply with federal and state laws, as well as USPTO rules and policies, when accessing and interacting with USPTO IT systems. Practitioners using AI tools are responsible for ensuring the tools do not exceed authorized access. To minimize such risk, practitioners may consider using the USPTO’s bulk data products for permitted data mining efforts.
In summary, to avoid violating USPTO rules and policies, practitioners should be cautious about the access they grant to AI tools that use USPTO resources. The guidance also makes clear that AI tools do not qualify as authorized users under USPTO regulations. Authorized individuals would be wise to supervise AI tools to minimize the risk of unauthorized data access.
Third, practitioners should be aware that using AI tools in practice before the USPTO can lead to inadvertent disclosure of client-sensitive or confidential information. The guidance emphasizes that practitioners must take reasonable steps to prevent such disclosures and ensure confidentiality obligations under 37 CFR 11.106 are not breached.
Under 37 CFR 11.106(a), “[a] practitioner shall not reveal information relating to the representation of a client unless the client gives informed consent.” Understanding how an AI tool works is essential to protecting confidential information. Practitioners should also be mindful that AI tools may introduce risks if they are developed by third parties, store data in insecure locations, or are subject to restrictive licensing agreements.
For example, when inventors input aspects of their invention into AI systems for tasks like prior art searches or drafting legal documents, the AI systems may store this information. Owners of these AI systems can utilize the data in several ways, such as enhancing their AI models through further training or sharing the data with third parties. This raises concerns about confidentiality breaches, potentially violating practitioners’ obligations under 37 CFR 11.106. If confidential information is used to train AI, there is a risk that this information, or portions of it, could appear in outputs generated by the AI and shared with others.
In summary, using AI tools in USPTO practice poses risks of inadvertently disclosing client-sensitive information. Investigating how an AI tool protects confidential information is critical to ensuring compliance with these confidentiality obligations. To safeguard a client's interests from potential breaches of confidentiality, practitioners should assess how these tools handle and store data and ensure that their use of AI tools aligns with the obligations of Rule 11.106(a).