Meta is shifting up to 90% of its privacy and societal risk assessments for new features and updates on Facebook, Instagram, and WhatsApp from human reviewers to AI-powered automation, according to internal documents. The change will let product updates and new features launch faster, with near-instant risk decisions based on AI evaluation of questionnaires filled out by product teams.

Meta says only low-risk decisions are automated and that human experts will still handle complex or novel cases, but former employees worry the shift will reduce scrutiny and make it easier for potentially harmful changes to ship. Critics argue that engineers and product managers lack deep privacy expertise, raising the odds of missed risks, particularly around youth safety, AI, and content integrity. Regulatory oversight in the EU will remain stricter because of the Digital Services Act.

The automation push fits Meta's broader strategy of moving quickly in a highly competitive tech landscape, but it has raised concerns among staff about reduced accountability and potential real-world harms.





























