Microsoft introduced the Maia 200, a second-generation AI accelerator built on TSMC’s 3-nanometer process, as it seeks to reduce reliance on third-party chips and sharpen Azure’s cost and performance profile. The company says Maia 200 delivers about 30% higher performance at the same price as competing offerings, with four chips per server and large-scale deployments linking up to 6,144 accelerators over Ethernet. Initial rollout is underway in U.S. regions, with Microsoft’s internal AI organizations—including the team led by Mustafa Suleyman—plus Microsoft 365 Copilot and Foundry slated as early users. The move underscores a broader cloud trend toward custom silicon as providers chase generative-AI demand while trying to rein in energy use and capital intensity.
Related article:
— Microsoft launches Azure AI Foundry for building and managing AI apps