Meta suffered two courtroom defeats this week that turned, in part, on its own internal research, underscoring the rising legal and reputational risks Big Tech faces as it races to deploy AI. Juries in New Mexico and Los Angeles found the company failed to adequately police its platforms and exposed young users to known harms, after reviewing executive emails, internal surveys, and studies, some of which indicated that teens experienced unwanted sexual advances on Instagram and that reduced Facebook use correlated with lower anxiety and depression. Meta and Google's YouTube, also named in the L.A. case, plan to appeal. The verdicts spotlight a dilemma for tech firms: investing in rigorous research can strengthen safety practices but also create discoverable evidence that heightens liability. Industry veterans warn that a post-Haugen pullback in internal safety work, along with limits on third-party researcher access, could chill scrutiny just as AI tools proliferate. Regulators are expected to intensify oversight, raising compliance costs and forcing greater transparency around product impacts.





























