
Shayan v. Shakib: California Lawyers Continue to Hallucinate, and the Court of Appeal Doesn’t Care How

Just two weeks ago, we posted a summary of what was then the latest in a growing series of appellate cases dealing with AI-hallucinated citations. Already, however, it has lost its novelty. Earlier this week, Shayan v. Shakib became the most recent reminder that California lawyers are responsible for the accuracy of the material they submit to the courts. It is also a wake-up call for anyone who thought that simply denying AI was the culprit would somehow mitigate the damage of submitting fake citations.

A Brief Filled With Quotes That Never Existed

The case itself is a straightforward business dispute. But the Court of Appeal never reached the merits because something far simpler stopped the entire process: the appellant’s opening brief contained several quotations attributed to California cases that simply aren’t real.

Respondent moved to strike the brief and dismiss the appeal, arguing that the lawyer had relied on AI tools that hallucinated the quotes and even pulled a transcript excerpt from the wrong case.

When confronted, the attorney didn’t deny the errors. Instead, he called them “clerical citation mistakes”: the result of his staff forgetting to replace a few paraphrased placeholders with actual case language. He denied that AI was involved.

The court didn’t seem to buy it for a second. More importantly, though, it made clear that it couldn’t matter less how the errors happened.

The Court’s Point: You Chose the Workflow, You Own the Consequences

The justices started with a basic rule every litigator learns early in their career:
You don’t misquote the law. Period.

Whether the bad quotes came from AI, a rushed associate, or a sloppy editing process didn’t matter. California courts have made their position very clear in recent decisions like People v. Alvarez and Noland v. Land of the Free, L.P.: if you file a brief containing fabricated authority, you are responsible for the misrepresentation.

In Shayan, the court walked through the types of inaccuracies it found—stitched-together quotes that never appeared in the cited case, paraphrases disguised as direct quotations, and even language about issues the underlying opinions never discussed.

Some of these errors didn’t even help the attorney’s argument. But that’s not the point. The court emphasized the systemic harm: if fake quotes make it into the briefing ecosystem, they get repeated, cited, and eventually accepted as “law” by someone who doesn’t double-check.

Sanctions: Not Fatal, But Serious

The court didn’t dismiss the appeal outright, but the sanctions were still significant:

  • $7,500 payable to the court,
  • The opening brief stricken, with a corrected version required, and
  • A State Bar referral.

The message here is plain and consistent with the Court of Appeal’s previous decisions on AI-generated inaccuracies: California courts are no longer extending the benefit of the doubt, or settling for a slap on the wrist, when legal arguments rest on fabricated authority.

Why Clients and In-House Teams Should Pay Attention

At first glance, this looks like a lawyer problem. It isn’t.

This is now a business-risk issue tied directly to the quality of your litigation strategy and your choice of litigators. The Court did not dismiss the appeal this time, but make no mistake: every time this happens, client outcomes are put directly at risk.

Key Takeaways

  • AI can assist, but cannot replace verification.
    Courts won’t tolerate briefs containing AI hallucinations, and no amount of finger-pointing at the software will undo the damage.
  • Your company’s reputation gets dragged in too.
    A sanctions order like this affects credibility—not just for the lawyer, but for the client behind the filing.

What Companies Should Do Right Now

AI is already part of litigation, whether or not anyone says so out loud. The question is whether your company has guardrails in place to control the risks.

Two Immediate Steps

  • Update outside counsel guidelines to require disclosure of AI use in research and drafting, and insist on human review before anything goes to court.
  • Set internal protocols so your in-house team doesn’t accidentally send AI-generated text into a filing or regulatory submission without a second set of eyes.

Shayan v. Shakib won’t be the last case where a California court confronts AI hallucinations, but it’s already one of the clearest. The takeaway is simple: AI can speed up drafting, but it cannot replace judgment, accuracy, or professional responsibility. Companies should expect their attorneys—and their internal teams—to build workflows that reflect that reality.