As AI chatbots proliferate, patients are increasingly consulting them for medical guidance, with mixed results. A BBC report follows a UK patient who credits ChatGPT with steering her to appropriate care for a suspected UTI, but who was also urged to rush to the ER after a fall that proved non-urgent. England's chief medical officer, Chris Whitty, warns that chatbots are often "confident and wrong."

The research bears out both sides. A University of Oxford study found chatbot recommendations were 95% accurate when given complete clinical details, but accuracy plunged to 35% when patients described symptoms conversationally, as they typically do in real-world use. A separate analysis by the Lundquist Institute found more than half of tested chatbot answers across cancer, vaccines, and nutrition problematic, with some promoting alternative cancer "treatments."

OpenAI says ChatGPT is intended for information and education, not as a replacement for professional advice, and that it works with clinicians to improve safety. The takeaway: AI can assist, but overreliance without verification risks misdiagnosis and inappropriate care.





























