The article discusses a report from the California Department of Technology in which nearly 200 state agencies reported no use of high-risk automated decision-making systems, despite evidence to the contrary, such as algorithms that affect criminal justice outcomes and unemployment benefits. Agencies are required by a 2023 state law to disclose their use of high-risk AI, yet the report’s findings contradict legislative analyses that projected substantial costs for governing such systems. Experts and advocates question the consistency of agency responses and the definitions used, pointing to ongoing and planned deployments of AI in state government. The debate continues as the Legislature considers new AI regulations, amid calls for greater oversight and transparency.
Related articles:
California agencies face calls for more AI transparency
Glossary: Key terms in California’s proposed AI laws