The Food and Drug Administration (FDA) is beta-testing an artificial intelligence tool, CDRH-GPT, designed to streamline the review process for medical devices such as pacemakers and insulin pumps. Although the tool is intended to improve efficiency in the wake of agency layoffs, it is facing technical difficulties and cannot reliably perform basic functions such as uploading documents or handling user queries. A second tool, Elsa, has been rolled out to all FDA staff for routine tasks but similarly struggles with accuracy and reliability. Experts and agency insiders worry that the FDA is pushing AI integration too quickly, potentially compromising safety reviews and raising conflict-of-interest concerns. Staff fear that the tools are not ready to support critical regulatory work and that the rapid shift toward AI could lead to errors and job insecurity among FDA employees.
Related articles:
How FDA Cleared an AI-Powered Imaging Device
The Promise and Challenges of AI in Healthcare
FDA Struggles with Staff Shortages Amid Regulatory Demands
Ethical and Regulatory Implications of AI in Medicine
Ensuring Quality in AI-Powered Healthcare Tools