We should invest in AI that helps people without lawyers.

We’ve all read the headlines about AI lawyers run amok. Since ChatGPT launched, citations to phantom cases have appeared in court filings across the country. Judges have responded by meting out sanctions, excoriating counsel, and, more recently, issuing a flurry of new orders and rules that regulate how litigants can use AI-based technologies.

But when it comes to lawyers’ use of AI, the solution is not bespoke new rules. As the ABA recently reminded us, the decades-old regulatory framework of attorney accountability already covers this ground. That time-tested structure was up to the task when, 20 years ago, American attorneys began shipping legal work to professionals in India, a delegation that sparked a short-lived ethical panic, and no amendments are needed to make it equally capable of addressing lawyers’ reliance on AI. Worse, the new AI rules are preventing innovative uses of AI that could help the millions of Americans who lack counsel. They are also distracting us from a more pressing task: reforming older, longstanding laws that restrict the use of technology by everyone, including courts. Getting those rules right matters far more for the future health of the legal system.

Lawyers Get AI. Everyone Else? You’re on Your Own

It is the dirty secret of American courts that most civil litigants today are self-represented. Indeed, the best evidence indicates that in roughly three-quarters of the 20 million civil cases filed each year in American courts, at least one party lacks a lawyer. In most of these cases, an institutional plaintiff (a bank, a landlord, or a debt collector) is pitted against an unrepresented individual. Facing highly consequential matters, from evictions to debt collections to family law disputes, millions are condemned to navigate byzantine court processes, designed by lawyers, for lawyers, without formal assistance.

Of course, self-represented litigants aren’t entirely alone. Many muddle through using the resources at their disposal, which increasingly means the internet or ChatGPT, both of which are rife with unreliable information. Indeed, as the National Center for State Courts recently observed, in the age of AI the American legal system has become increasingly awash in “junk.” Still, there is hope: the tools of generative AI are improving rapidly. But even much-improved tools will run into two barriers that currently prevent generative AI from helping self-represented litigants.

The first is the long-standing rule, on the books in every state, that only lawyers may practice law. These unauthorized practice of law (UPL) rules apply to nonhumans too, preventing tech providers, the LegalZooms of the world, from offering comprehensive assistance to those who need it. UPL rules already limit what tech tools can do for self-represented litigants, and they will continue to do so as those tools’ capabilities grow. The second barrier is the new wave of AI-specific court orders and rules. Consider a recent court order in North Carolina. It prohibits the use of AI in research for the preparation of a court filing “with the exception of such artificial intelligence embedded in the standard on-line legal research sources Westlaw, Lexis, FastCase and Bloomberg.”

Can you guess how many unrepresented litigants have access to these pricey commercial databases? Not many, to put it mildly. The order essentially gives lawyers the green light to use generative AI while tying the hands of those without counsel. And even beyond their explicit prohibitions, such rules can have a chilling effect, especially on litigants who lack counsel. Must self-represented litigants disclose that their search engine has generative AI capabilities? Will the average person even know? What if someone used generative AI to parse the legalese strewn across court websites? Does that need to be disclosed? The answers to these questions are unclear, which underscores the burdensome and restrictive nature of these knee-jerk policies.

‘Courthouse AI’ as the New Frontier of Access to Justice

What to do? One can easily imagine lawmakers and rulemakers responding by doubling down on UPL provisions and preventing OpenAI and other tech companies from “practicing law.” Lawyers focused on their bottom lines might applaud that development.

But there may be a better option, and it’s already underway. Courts are incorporating AI into their own operations, positioning themselves as authoritative sources of legal information and self-help resources. By leveraging generative AI, courts can ensure that their websites, portals, and conveniently located kiosks provide reliable, actionable, and individually tailored information to self-represented litigants. Newly digitized courts may be the only institutions capable of keeping self-represented litigants afloat on a sea of junk. There is a hitch, however. The same UPL rules that limit the LegalZooms of the world also prevent courts and courthouse staff, under threat of criminal penalties, from providing self-represented litigants with reliable, actionable, and tailored advice. This restriction, which we call “courthouse UPL,” is a major obstacle to establishing our nation’s courts as trusted and authoritative sources of legal guidance for unrepresented parties, and it limits the digital assistance that courts can provide.

Solving this problem is, of course, more challenging than issuing orders narrowly targeting lawyers’ brief-writing. We need to update outdated state guidelines defining what courts and court staff can do. Many of those guidelines speak to an earlier, analog era, and even the more recent ones address the static websites of yesteryear, not the dynamic, interactive tools that generative AI makes possible.

Lacking technical capacity of their own, courts also need to develop AI R&D pipelines, whether via smart procurement or by partnering with universities and a growing “public interest technology” movement, to learn what works and to build court-hosted tools that are trustworthy, flexible, and responsive to litigant needs. And we must face questions about the courts’ role, and about courthouse UPL, head-on. Courts are not neutral or impartial if they choose to restrict, rather than facilitate, litigants’ access to AI-based assistance.

Time will tell what a new, digitized civil justice system will look like. But this much is clear: The existing attorney regulatory structure is adequate when it comes to AI use by lawyers; no additional guidance is needed. The story is different for self-represented litigants and the courts working to serve them. If we are wise enough to allow it, generative AI can be a real asset for both.

Nora Freeman Engstrom is the Ernest W. McFarland Professor of Law at Stanford Law School and co-directs Stanford’s Deborah L. Rhode Center on the Legal Profession.

ABAJournal.com is accepting queries for original, thoughtful, nonpromotional articles and commentary by unpaid contributors to run in the Your Voice section. You can find details and submission guidelines at “Your Submissions Your Voice.”
