Nvidia-backed startup Starcloud said it has run Google’s open-source Gemma language model on an H100 GPU aboard its Starcloud-1 satellite, in what it calls the first high-powered LLM inference in orbit. The demonstration, along with on-orbit training of a NanoGPT model, is intended to validate the feasibility of space-based data centers as terrestrial facilities strain power grids and water supplies. Starcloud plans a 5-gigawatt orbital complex powered by near-continuous solar energy and aims to fly multiple H100s plus Nvidia’s Blackwell platform on a second satellite in 2026.
The company is targeting commercial and defense use cases such as real-time analysis of satellite imagery for wildfire and maritime rescue alerts. The pitch: energy costs an order of magnitude lower than on Earth, with no weather or day-night disruptions. Analysts caution that significant risks remain, including radiation exposure, orbital debris, the difficulty of in-orbit maintenance, and data-governance hurdles. Tech heavyweights are circling the concept: Google has announced Project Suncatcher for space-based compute, while rivals pursue lunar and orbital data hubs, setting up a race to shift parts of AI infrastructure off-planet.