Qualcomm said it will enter the data-center AI market with two new accelerator lines: the AI200, shipping in 2026, and the AI250, following in 2027. Both target inference workloads and will be sold as rack-scale, liquid-cooled systems. The move pits the mobile-chip giant against AMD and Nvidia, the latter of which holds more than 90% of the AI chip market.

Qualcomm touted lower operating costs, 768GB of memory per card, and flexible "mix and match" configurations that could include its CPUs, but it declined to disclose pricing or how many chips fit in a rack. Shares rose 11% on the announcement.

The accelerators build on the Hexagon neural processing units (NPUs) used in Qualcomm's smartphone chips, and the company has a regional deployment pact with Saudi Arabia's Humain. The push comes as cloud providers from Google to Microsoft and Amazon field their own in-house accelerators. McKinsey estimates nearly $6.7 trillion in data-center capital spending through 2030, underscoring the stakes in AI infrastructure.































