A study published in BMJ Open found that popular AI chatbots, when prompted with leading health questions, frequently produced problematic answers and in some cases pointed users toward unproven, potentially dangerous alternatives to chemotherapy. Researchers at the Lundquist Institute at Harbor-UCLA tested the free versions of Gemini, DeepSeek, Meta AI, ChatGPT, and Grok in February 2025 and judged nearly half of the responses problematic, classifying 19.6% as highly problematic. While the bots often cautioned against unverified treatments, they also listed alternative cancer therapies and, in some cases, clinics offering them, a pattern the researchers described as “false balance.”

With roughly a third of U.S. adults turning to AI for health information, oncologists and clinicians warn that flawed outputs can mislead patients and delay evidence-based care. The findings underscore the gap between rapid AI deployment and safeguards, as regulators, including the FDA, grapple with how to oversee such tools.
Related articles:
— Towards expert-level medical question answering with large language models (Med-PaLM)
— Ethics and governance of artificial intelligence for health (WHO Guidance)
— Complementary and Alternative Medicine for Patients with Cancer (NCI)