Slingshot’s debut study on its mental health chatbot is drawing scrutiny, underscoring how hard it is to judge the safety and efficacy of AI tools in behavioral health. The company says its findings support responsible use, but outside experts question the study’s design and outcome measures, and whether it adequately captures risks such as inappropriate recommendations or mishandled crises. The debate lands as employers, payers, and health systems weigh fast-growing demand for low-cost mental health support against thin clinical evidence and evolving rules. Regulators are signaling tougher oversight for AI-enabled tools that veer into diagnosis or treatment claims, while investors press for clearer validation and guardrails. The episode highlights a broader reality: in mental health, where stakes are high and standards vary, credible, independent evaluation of AI is becoming a prerequisite for adoption.