Google used its Cloud Next ’26 conference to press its lead in enterprise AI, unveiling the agent-focused Gemini Enterprise Agent Platform and the eighth generation of its custom AI chips. CEO Sundar Pichai said Google’s first-party models now process more than 16 billion tokens per minute via direct APIs, and that just over half of the company’s 2026 ML compute investment is earmarked for Cloud.

The new TPU 8 lineup splits into two variants. The training-focused 8t scales to 9,600 chips with 2 petabytes of shared memory, delivering triple the processing power of Ironwood and up to 2x performance per watt. The inference-focused 8i links 1,152 chips and triples on-chip SRAM to cut latency when running millions of agents concurrently.

Security took center stage with agentic threat-detection offerings that combine Google Threat Intelligence and Security Operations with Wiz’s AI security platform, alongside a new AI Application Protection Platform.

Internally, Google said 75% of its new code is now AI-generated and approved by engineers, with agentic workflows accelerating complex code migrations sixfold. Security operations cut mitigation time by more than 90%, and marketing campaigns saw 70% faster asset creation and a 20% lift in conversions.

Taken together, the announcements underscore Google Cloud’s push to industrialize “agentic” AI across infrastructure, security, and development, while positioning its chips and platforms against rivals in a rapidly scaling market.





























