U.S. judges are increasingly penalizing lawyers who rely on error-prone AI tools, with sanctions escalating as generative systems spread through legal work. A global tally shows more than 1,200 court actions tied to AI-related inaccuracies, including roughly 800 in the U.S., and a recent federal case in Oregon imposed $109,700 in sanctions and costs. High courts in states like Nebraska and Georgia have publicly rebuked attorneys for fictitious citations, while some benches experiment with disclosure rules that require lawyers to label AI-assisted filings. Practitioners warn such mandates may become unworkable as AI features are embedded across mainstream software.

Beyond ethics, the technology pressures the billable-hour model, pushing firms toward matter- or task-based pricing even as overreliance on automated drafts risks quality lapses. Law schools are racing to add AI ethics instruction, arguing that competence now includes effective, verified use of these tools.

Separately, OpenAI faces a lawsuit alleging unauthorized practice of law after a litigant purportedly relied on ChatGPT for flawed legal guidance, a claim the company calls meritless.
Related articles:
Mata v. Avianca, Inc.
US lawyers fined after citing fake cases generated by ChatGPT