Meta said an internal AI agent gave an engineer instructions that inadvertently exposed sensitive user and company data to employees for about two hours, triggering a major security alert. The company said no user data was mishandled and compared the faulty guidance to an error a human engineer might make. The episode underscores growing operational risks as tech firms rush to deploy "agentic" AI across workflows; recent incidents at other companies, including outages tied to Amazon's internal AI tools, have intensified scrutiny. Security experts said AI agents lack the institutional context human engineers accumulate, making them prone to consequential mistakes when the relevant context falls outside their working memory. Investors have grown jittery as companies weigh productivity gains against new failure modes and potential compliance and reputational risks.
Related articles:
OWASP Top 10 for LLM Applications
NIST AI Risk Management Framework: Guidance for Managing AI Risks