Alaska’s court system spent more than a year building an AI probate assistant intended to guide residents through complex forms, only to find that reliability concerns and “hallucinations” complicated deployment and stretched timelines. The Alaska Virtual Assistant, developed with the National Center for State Courts and legal-tech entrepreneur Tom Martin, repeatedly produced inaccurate or extraneous guidance—an unacceptable risk for users handling estates—forcing the team to rework model choices, tone, prompts, and evaluation tests.
Administrators scaled back ambitions, tightened the chatbot’s knowledge base, and created a leaner 16-question accuracy test. Costs per query are low, but shifting model behavior and version changes mean ongoing oversight will be required. The episode underscores the gap between AI hype and public-sector reality: in high-stakes settings such as courts, officials say near-perfect accuracy and human review remain essential, even as agencies weigh AI’s promise for efficiency against the risks of bad advice. A limited launch is slated for late January.