Investors and tech giants are pouring money into “world models”: AI systems trained on video and physics simulations to create interactive, navigable virtual environments. The effort, backed by more than $1 billion in funding for Paris-based AMI Labs alongside parallel work at Google and Nvidia, aims to accelerate training for robots and autonomous systems, promising safer, faster development than is possible in the real world. Proponents say these models capture the cause-and-effect dynamics missing from text-to-image tools, pointing to early demonstrations such as DeepMind’s Genie 3 and Runway’s GWM-1. The approach raises questions about data sources, reliability and commercialization timelines, but it could reshape how AI is deployed in research labs and industry.
Related articles:
– World Models
– Habitat: A platform for embodied AI research
– Ego4D: An egocentric video dataset