Artificial-intelligence chatbots can endanger patients when used for medical advice, according to a study led by researchers at the University of Oxford and published in Nature Medicine. In an experiment with nearly 1,300 participants using symptom scenarios, large language models delivered a mix of accurate and misleading guidance that users struggled to parse, the team found.

Co-author Rebecca Payne, a general practitioner, said the findings show AI “isn’t ready to take on the role of the physician,” warning that chatbots can miss urgent red flags and provide incorrect diagnoses. While LLMs perform well on standardized medical knowledge tests, lead author Andrew Bean said real-world interactions remain a challenge, underscoring the risks of deploying AI for high-stakes decisions.

The authors urged caution and further development before positioning chatbots as frontline tools for patient advice.
Related articles:
Ethics and governance of artificial intelligence for health
Performance of ChatGPT on USMLE: Potential for AI-assisted medical education
AI Risk Management Framework (AI RMF 1.0)