Silicon Valley is bullish on AI agents, but the economics and engineering remain fraught. At back-to-back industry events, executives and engineers from Google, Amazon, Microsoft and Meta warned that deploying fleets of autonomous assistants is expensive and operationally brittle. The biggest pitfall, several said, is indiscriminately funneling every task through large language models—burning "millions of tokens" without clear ROI, as Meibel's CEO Kevin McGrath put it.

Google's Deep Shah highlighted inference costs as a primary headwind, while Synchtron's Ravi Bulusu described the interlocking data, platform and workforce dependencies as "chaotic." The debate also carried a cross-border angle: Shanghai-based ThinkingAI, newly repositioned as an agent-management platform and partnered with MiniMax, argued that popular harnesses like OpenClaw are too complex and insecure for enterprise use, even as demand spreads beyond gaming into broader industries.

The message for corporate buyers: AI agents may be the "next ChatGPT," as Nvidia's Jensen Huang said, but success hinges on disciplined task selection, cost controls, and robust governance—especially before scaling.