Meta is recasting its artificial-intelligence ambitions, shifting from its open-source Llama models to a frontier system code-named Avocado that could be proprietary—an about-face that has sown confusion internally and rattled investors seeking clarity on returns. After spending $14.3 billion to recruit Scale AI founder Alexandr Wang and other top researchers, Meta lifted 2025 capital-expenditure guidance to as much as $72 billion, even as its shares lag the broader tech sector and Alphabet.
Avocado’s debut, now expected in early 2026, follows a lackluster reception for Llama 4 and an internal shake-up that saw longtime product chief Chris Cox step back from GenAI oversight. Meta has cut roles in its AI units, including FAIR, and chief AI scientist Yann LeCun departed to launch a startup. New leaders Wang and Nat Friedman have imported a faster, more secretive development culture—“demo, don’t memo”—that sidelines traditional Meta workflows and stresses rapid prototyping.
Competition is intensifying. Google’s Gemini 3, OpenAI’s GPT-5 updates and Anthropic’s Claude Opus 4.5 have raised the bar, while Nvidia continues to dominate the compute underpinning every major model. Meta is supplementing its own infrastructure with cloud capacity from CoreWeave and Oracle as it builds the Hyperion data center in Louisiana through a $27 billion joint venture backed by Blue Owl Capital. Early products have stumbled: the Vibes AI video feed trails OpenAI’s Sora 2.
Zuckerberg, who once touted open-source leadership, has tempered that stance as Llama code informed rivals and Avocado tilts proprietary. With 70-hour workweeks, tighter resource control, and ongoing reorganizations, Meta is betting big that a breakout model—and products to match—can justify soaring AI spend and extend its advantage beyond digital ads.