GenAI saves time, creates risk
Legal professionals are rapidly incorporating generative artificial intelligence (GenAI) into their daily practice. In a 2026 legal industry study, 69% of legal professionals reported using GenAI tools such as ChatGPT, Gemini, or Claude in their daily work—a dramatic increase from just 31% the year before.
Why such rapid adoption? Time savings is a key motivation and outcome: 38% of survey respondents reported saving 1-5 hours/week using GenAI, and 14% reported saving 6-10 hours/week. GenAI can help legal professionals in many ways, including:
- Summarizing documents
- Performing basic research
- Pressure-testing arguments
- Tightening language
- Accelerating early drafts
That’s all good. Saving time on routine tasks to focus more on legal strategy is key to running an efficient practice, but California is making one thing clear: efficiency doesn’t trump responsibility.
That message is at the heart of California Senate Bill 574 (SB 574)—a bill that, if enacted, will directly regulate how attorneys use GenAI in their practice and briefing process. Even though the bill is still pending, courts are already enforcing the principles behind it. For attorneys, that means the time to adjust workflows is now.
No crying hallucination: if you cite it, you’ve read it
SB 574’s most important provision is also its simplest: An attorney may not include a citation in a filing unless they have personally read and verified the source—even if that citation came from AI.
That requirement cuts straight to the biggest risk in AI-assisted drafting: fabricated or mischaracterized authority. Generative AI tools can produce citations that look entirely credible but either aren't real (hallucinations) or don't say what the brief claims they say.
California’s response is direct: there’s no distinction between AI-generated citations and any other citation. If it appears in your brief, you own it—and your signature confirms that you stand behind it.
What SB 574 actually requires
SB 574 doesn’t ban AI. It sets guardrails that formalize what courts already expect. In practical terms, attorneys using AI would be required to:
- Verify accuracy: Take reasonable steps to confirm that any AI-generated content, especially citations, is correct
- Fix hallucinations: Identify and remove false or misleading output—as well as any biased, offensive, or harmful content—before filing
- Protect confidentiality: Avoid entering nonpublic, personal identifying, or client-sensitive information into public AI systems
- Not discriminate: Ensure the use of GenAI does not unlawfully discriminate against a protected class
None of this is entirely new. California attorneys have always been bound by duties of competence and candor, and courts have always required accurate citations. What's changing is the framing: SB 574 ties these long-standing obligations directly to AI and spells out the sanctions attorneys can face for violating the policy.
Attorneys are already facing sanctions over AI
It’s easy to look at SB 574 as just something to keep an eye on, but in reality, it reflects how courts are already behaving. Today, the risk of relying on GenAI isn’t just theoretical—it can significantly impact your credibility with the court.
Judges have already begun sanctioning attorneys for briefs that include:
- Citations to nonexistent cases
- Quotes that don’t appear in the cited authority
- Misstatements that originated from AI-generated text
Attorneys’ embarrassing GenAI mishaps are in the news nearly every day (a recent high-profile example: Sullivan & Cromwell’s apology to the court for more than 40 errors), and the penalties are really starting to add up.
If this brings you schadenfreude, check out Damien Charlotin’s global database of AI hallucination cases—which grew from 1,353 cases to 1,394 between the drafting and posting of this article. In these decisions, courts consistently emphasize one point: the problem isn’t using AI; it’s failing to verify it. SB 574 simply makes that expectation explicit—and easier to enforce.
A shift from ethics to process: check your work
For years, attorneys have operated under broad duties like competence and candor. Those duties still apply, but GenAI is forcing a more structured approach.
SB 574 signals a shift from “You should check your work” to “You must follow a verifiable process for checking AI-generated work.” That distinction means attorneys, and firms, need to think about how verification actually happens, not just whether it happens. Automated citation hyperlinking technology, such as that provided by TypeLaw, can be an important first step in that process.
TypeLaw automatically flags and fixes citations to comply with the Bluebook and local rules of court, which can catch both AI errors and human typos. It then hyperlinks all citations to the corresponding authority or record—which verifies that the cited sources exist—and makes it easier for the court to follow your argument. If the platform cannot find a source to link, it flags the citation for further human review.
Where SB 574 fits with existing California AI rules
SB 574 builds on changes already underway in California. In 2025, California adopted Rule 10.430, requiring courts to implement policies governing generative AI use by court staff and judicial officers. Those policies focus on:
- Accuracy and verification
- Bias and fairness
- Confidentiality
- Transparency
At the same time, courts aren’t waiting for SB 574. Individual judges have already issued their own rules around GenAI for the attorneys who appear before them.
For example, a federal bankruptcy judge in California issued a standing order requiring attorneys to disclose when GenAI is used in drafting and certify that all content, including citations, has been independently verified.
The result is a layered framework around generative AI use in California courts.
What attorneys using AI should do now
Attorneys don’t need to stop using generative AI altogether, but it’s time to adjust workflows to be more deliberate about how GenAI is used. A few practical adjustments go a long way:
- Slow down at the citation stage: AI can help find cases, but every citation should be pulled, read, and confirmed in the original source
- Separate drafting from validation: Treat AI-assisted drafting as a preliminary step, not a finished product
- Be mindful of what you paste into GenAI tools: If the information is confidential or personally identifiable, it doesn’t belong in a public AI interface
- Assume your process may be scrutinized: If a citation is challenged, you may need to show how it was verified. Hyperlinking citations to the cited authority and the record demonstrates they aren’t hallucinated
These aren’t new habits, but they are becoming non-negotiable.
Why this matters more for appellate work
For appellate attorneys, the stakes are even higher. Appellate courts rely heavily on precise citations, accurate quotations, and faithful representations of the record and authority when rendering their decisions.
A single incorrect citation can do more than weaken an argument—it can undermine credibility across the entire brief. SB 574 reinforces that reality. It doesn’t raise the standard so much as it makes the existing standard impossible to ignore.
Codifying good practice fundamentals in a GenAI world
SB 574 is still moving through the legislature, but its trajectory reflects a broader trend—both in California and nationally. Courts are sharpening the lines around how GenAI is used to ensure that the fundamentals of good lawyering aren’t lost:
- If you cite it, you’ve read it
- If you rely on it, you’ve verified it
- And if it turns out to be wrong, it’s your name on the brief—not the AI’s.
For attorneys, that translates into a familiar principle, applied in a new context: you are still accountable for every word in your brief—no matter how it was generated.