Inside USPTO’s AI Subject Matter Eligibility Guidance
What You Need To Know
- The USPTO has issued new subject matter eligibility examples addressing innovations in artificial intelligence.
- These examples generally maintain the current state of examination practice for USPTO examiners and practitioners, but they are not binding for federal courts.
- Companies should consider how to align their patent application specifications and claims to best position their applications to overcome subject matter eligibility rejections.
In accordance with President Biden’s October 2023 Executive Order on the use of Artificial Intelligence (AI), the U.S. Patent and Trademark Office (USPTO) has issued new subject matter eligibility guidance relating to AI inventions. This guidance is directed to patent examiners and practitioners handling AI-related patent applications.
Subject matter eligibility under 35 U.S.C. § 101 has long been a point of confusion—and frustration—for patent applicants as patent examiners inconsistently apply Federal Circuit case law and the USPTO’s existing guidance to reject patent claims. This new guidance attempts to clarify certain points within the USPTO’s eligibility framework. In particular, the USPTO addresses Step 2A, Prong One (Does a patent claim “recite” a judicial exception?) and Step 2A, Prong Two (Does the claim “integrate the judicial exception into a practical application of that exception”?).
Unfortunately, this new guidance is unlikely to provide useful clarity about what types of claims USPTO examiners will find to be patent-eligible. Instead, the new guidance repeats existing language in the MPEP without breaking new ground.
The guidance centers on three new subject matter eligibility examples covering disparate technical fields. Each example includes at least one ineligible claim and at least one eligible claim. In all three examples, eligibility turns on Step 2A, Prong Two, with the eligible claims reciting sufficient post-ML-analysis steps to integrate the abstract-idea judicial exceptions into a practical application.
Example 47
This example relates to applying a neural network to monitor malicious network activity.
Claim 1 was deemed eligible, as it is framed as a hardware product-type claim, namely an “application-specific integrated circuit” comprising neurons and synaptic circuits. Notably, the claim does not recite any method or method-like steps.
Claims 2 and 3 recite methods for detecting malicious activity, but only Claim 3 was deemed eligible. Claim 2 recites leveraging an “artificial neural network” to identify anomalies from a data set, but abruptly ends there. As such, the claim is neither limited to any particular field of use nor aimed at solving any particular problem, hence the finding of ineligibility.
In contrast, Claim 3 contextualizes the anomaly detection to network monitoring and recites additional post-anomaly-detection steps. These steps include determining that the anomalous network activity is malicious and performing remedial measures to remove the malicious activity and stop further incidents. Unsurprisingly, these steps supply a practical application via an improvement to the technical field of network intrusion detection. Importantly, the guidance reasons that the specification must describe the asserted improvement.
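For readers who think in code, the distinction the guidance draws can be pictured as how much happens after the model produces its output. The sketch below is purely illustrative and is not taken from the USPTO’s example claims; the function names and the toy “model” are our own stand-ins.

```python
# Hypothetical sketch: a Claim-2-like step that stops at anomaly scores,
# versus a Claim-3-like flow that adds post-ML-analysis steps.
from dataclasses import dataclass

@dataclass
class Detection:
    source_ip: str
    anomaly_score: float

def detect_anomalies(traffic, model):
    """Claim-2-like endpoint: score network data with an ANN-style model and stop."""
    return [Detection(pkt["src"], model(pkt)) for pkt in traffic]

def monitor_network(traffic, model, threshold=0.9):
    """Claim-3-like flow: contextualize the detections and act on them."""
    actions = []
    for det in detect_anomalies(traffic, model):
        # Post-ML-analysis step 1: determine the anomaly is malicious.
        if det.anomaly_score >= threshold:
            # Post-ML-analysis step 2: take a remedial measure (placeholder).
            actions.append(f"blocked {det.source_ip}")
    return actions

# Toy usage with a stand-in "model" (a plain scoring function, not a trained ANN).
traffic = [{"src": "10.0.0.5", "bytes": 90_000}, {"src": "10.0.0.7", "bytes": 120}]
score = lambda pkt: 1.0 if pkt["bytes"] > 50_000 else 0.1
print(monitor_network(traffic, score))
```

In this framing, the eligibility-relevant material is everything inside monitor_network that follows the model call, which is the kind of downstream, field-specific activity the guidance credits as a practical application.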
Example 48
This example relates to speech separation methods leveraging deep neural networks.
Claim 1 was deemed ineligible, while Claims 2 and 3 were found eligible. Claim 1 generally recites taking a mixed speech signal, converting it into a spectrogram with a Fourier transform, and then applying a deep neural network to determine embedding vectors from the spectrogram. Although the claim somewhat limits the field of use to speech signal processing, it stops short of solving any particular technological problem.
Much like the earlier example, Claims 2 and 3 add meat to the bones: each includes sufficient post-ML-analysis steps that embody technological improvements. Claim 2 (which depends on Claim 1) further recites synthesizing speech waveforms for different audio sources and combining particular waveforms to parse out speech from a target source. In a similar vein, Claim 3 recites separating out speech signals from different sources and then decoding a transcript of the target source’s singled-out speech signal. Interestingly, the guidance analyzes each claim as a whole, considering both the additional elements of the claim and the limitations directed to the abstract idea, when identifying the technological improvement.
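To make the contrast concrete, the toy pipeline below loosely mirrors the claim structure: a Claim-1-like path that stops at embedding vectors, and a further path that goes on to separate out sources. It is our own illustrative sketch, not code from the USPTO’s examples, and the “DNN” here is just a random projection standing in for a trained network.

```python
# Hypothetical sketch of the claimed pipeline shape, not the actual example claims.
import numpy as np

def to_spectrogram(signal, frame=256):
    """Mixed speech signal -> magnitude spectrogram via framewise Fourier transform."""
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, frame)]
    return np.abs(np.array([np.fft.rfft(f) for f in frames]))

def embed(spectrogram, dnn):
    """Claim-1-like endpoint: per-frame embedding vectors from the DNN stand-in."""
    return np.array([dnn(frame) for frame in spectrogram])

def separate_sources(signal, dnn, n_sources=2):
    """Claim-2/3-like flow: group embeddings by source and keep each source's
    frames (a crude stand-in for waveform synthesis or transcript decoding)."""
    spec = to_spectrogram(signal)
    emb = embed(spec, dnn)
    labels = (emb[:, 0] > np.median(emb[:, 0])).astype(int)  # toy two-way split
    return [spec[labels == k] for k in range(n_sources)]

# Toy usage: a mixture of two sine-wave "speakers" and a random-projection "DNN".
t = np.linspace(0, 1, 8000)
mixture = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
proj = rng.normal(size=(129, 8))           # rfft of 256 samples -> 129 bins
dnn = lambda frame: np.tanh(frame @ proj)  # stand-in for a trained network
print([s.shape for s in separate_sources(mixture, dnn)])
```

The eligibility line the guidance draws falls roughly between embed (where Claim 1 stops) and separate_sources (the additional, problem-solving steps credited in Claims 2 and 3).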
Example 49
This example relates to the application of an AI model to diagnose a hypothetical medical condition, with a vague treatment step following a positive diagnosis. Although Claim 1 includes the treatment step, the recited treatment was reasoned to be insufficiently particular. Claim 2, on the other hand, depends on Claim 1 and limits the “appropriate treatment” to “Compound X eye drops.” As expected, Claim 2 passes muster: specifying the particular compound gets the claim over the hump as a practical application of administering a particular treatment.
Takeaways
These examples provide a few takeaways for patent applicants who are innovating in the AI space. First, check and double-check the specification to make sure the disclosure sufficiently describes computer product-type devices (for fallback claims inferentially reciting training of neural networks), the technological problems and the solutions achieved by the invention (for innovations related to computer technology), or treatment administration (for innovations related to diagnostics and treatment). Second, strategize the claims to ensure post-ML-analysis steps are numerous and tied to the “practical application” achieved by the innovation.
Unfortunately, the guidance is not the be-all and end-all of subject matter eligibility practice. Curiously, the abstract idea grouping for certain methods of organizing human activity went unaddressed in the new examples. This poses quite a hurdle for the many applicants applying AI/ML to real-world systems with human involvement. There also remains much uncertainty for applicants innovating in more nebulous and nascent AI/ML technologies, which are the hardest to analogize to these examples. Lastly, whether a claim limitation is characterized as part of the judicial exception or as an additional element remains hazy, which will inevitably lead to continued inconsistent interpretations and characterizations of claim limitations from examiner to examiner.