Schlichter v. Kennedy: California Courts Continue to Make Examples of “ChatGPT Lawyers”

Generative AI has become part of many litigators’ workflows, whether or not they say so out loud. But California’s appellate courts are drawing a very clear line: you can use AI as a tool, but you cannot shift your professional duties onto it—especially when it comes to legal research and citations.

In Schlichter v. Kennedy (Fourth District, Div. Two, Nov. 17, 2025), the Court of Appeal issued a published sanctions order against appellate counsel after finding that his briefs contained multiple fabricated case citations with all the hallmarks of AI “hallucinations.” The court imposed monetary sanctions and referred the matter to the State Bar. For all of the litigators out there, the lesson is simple: AI governance is now part of litigation risk management.

What actually went wrong?

In an appeal arising from a civil dispute, counsel for the appellant filed a petition for writ of supersedeas and an opening brief that cited several cases which, as cited, did not exist. The case names looked real. The reporter citations looked real. The propositions they supposedly supported were the kind you see every day in appellate practice.

When the court attempted to verify those authorities, the volume and page numbers pointed to entirely different cases—or to nothing at all. The court ordered counsel to provide official copies of four specific cases. He responded by producing different cases with the same names but different citations, and those real cases did not say what the brief claimed they said.

That triggered an order to show cause why sanctions should not be imposed for relying on fabricated legal authority. In response, counsel submitted a declaration under penalty of perjury. He admitted the citations were wrong, but characterized the problem as a “clerical error” tied to breakdowns in his office processes. He denied relying on hallucinated output from generative AI, even as he acknowledged using AI tools in connection with at least one of the filings.

After a hearing, the Court of Appeal found his explanations not credible. It concluded that the pattern of bogus citations fit what other courts have described as AI hallucinations: realistic-looking case names and cites that do not correspond to real authorities supporting the propositions offered. The court then issued a published order imposing sanctions.

The court’s message about AI

The court did not sanction the mere use of AI. Instead, it grounded its analysis in familiar rules that predate any large language model:

  • Each point in a brief must be supported, where possible, by citations to legal authority.
  • The Courts of Appeal may sanction unreasonable violations of those rules.
  • By signing a brief, an attorney certifies that the legal contentions are warranted by existing law or a non-frivolous argument for change.

What Schlichter does is connect those long-standing duties to the reality of generative AI. The opinion leans on earlier California decisions such as Noland and Alvarez, which emphasized that attorneys must personally verify that each cited case exists, that the citation is correct, and that the case actually stands for the proposition asserted. That verification obligation, the courts have now said more than once, cannot be delegated to AI or to any other technology.

In Schlichter, the court imposed a $1,750 sanction payable personally by counsel and directed the clerk to notify the State Bar. The referral matters: it signals that misuse of AI in litigation is not just a “tech mistake,” but is treated as a potentially significant ethical issue.

Why this matters

Schlichter is not simply about one lawyer’s missteps. It is part of an emerging rulebook for AI in California litigation.

First, it emphasizes that how you use AI matters far more in the courtroom than whether you use it; AI use is now generally accepted as not inherently inconsistent with lawyers’ professional responsibilities. Courts are less interested in whether a brief was touched by AI and much more interested in whether the filing is accurate, candid, and competently researched. But when that accuracy, candor, and competence are undermined by AI hallucinations, judges are attuned to how these tools can go wrong, and they are committed to curbing their growing misuse.

Second, and relatedly, the case highlights that lawyers should not expect to escape these episodes with a slap on the wrist. A published decision, a fine of more than $1,000, and a State Bar referral are significant on their own, but they can also create substantial collateral consequences in other matters, from the availability and pricing of insurance to how courts view a lawyer’s future representations. For corporate clients, that translates into reputational and strategic risk if key matters are being handled by lawyers who lack sound AI practices.

Finally, Schlichter pushes AI governance into the relationship between in-house teams and outside counsel. Many companies already have policies about not feeding confidential information into public AI tools. Fewer have addressed how their outside litigators are allowed to use AI when drafting documents that will be filed with courts and regulators.

Practical questions to ask now

You do not need to ban AI to stay safe after Schlichter. But you do need to treat it as a high-risk tool that requires controls. In practice, that means asking:

  • How are our outside counsel using generative AI in appellate and motion practice, and what is their documented process for verifying citations?
  • Do our outside counsel guidelines expressly require human verification of authorities and quotations before anything is filed?
  • Internally, are we building AI tools or workflows that make it too easy for AI-generated text to move straight into court filings without a robust review layer?

Schlichter v. Kennedy is unlikely to be the last California case on AI in litigation. But it amplifies an already clear signal: courts expect the same level of accuracy and candor they always have, and they will not accept “the AI made me do it” as an excuse.