MIT researchers say they have used generative artificial intelligence to sharpen “wireless vision” systems that map what cameras can’t see, reconstructing the shapes of hidden objects and even entire rooms from reflected Wi‑Fi-like signals. The team’s Wave‑Former model completes 3D object shapes from partial millimeter‑wave (mmWave) reflections, improving accuracy by nearly 20 percent over prior methods across dozens of everyday items concealed by common materials.

A companion system, RISE, uses a single stationary radar and the multipath “ghost” reflections produced by people moving through a space to infer room layouts with roughly double the precision of existing techniques, without deploying mobile scanners or cameras that raise privacy concerns. Because large real-world wireless datasets are scarce, both systems are trained on synthetic data that embeds the physics of specular reflections and mmWave noise.

The work, led by Associate Professor Fadel Adib and slated for presentation at CVPR, could streamline warehouse verification and make smart-home robots safer and more useful. It is backed by the National Science Foundation, the MIT Media Lab, and Amazon.
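To see why mmWave returns are only partial views of an object, and why synthetic data must model specular reflection, consider that smooth surfaces act like mirrors at millimeter wavelengths: the radar only receives energy from patches whose surface normal points back toward it. The sketch below is a hypothetical illustration (not the MIT team’s code): it samples a sphere’s surface, keeps only the specular-visible patch, and jitters those points to mimic mmWave range noise, producing the kind of sparse, partial point cloud a shape-completion model would have to fill in. All function names and parameter values here are invented for illustration.

```python
import math
import random

def visible_specular_points(radar, center, r, n=2000, tol_deg=15.0, noise=0.005):
    """Sample a sphere's surface; keep only points whose outward normal
    faces the radar within tol_deg (a crude specularity model), then add
    Gaussian jitter to mimic mmWave measurement noise. Illustrative only."""
    rng = random.Random(0)
    kept = []
    for _ in range(n):
        # Uniformly sample a direction on the unit sphere (the surface normal).
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        normal = (s * math.cos(phi), s * math.sin(phi), z)
        # Surface point and the direction from it back to the radar.
        p = tuple(c + r * d for c, d in zip(center, normal))
        to_radar = tuple(a - b for a, b in zip(radar, p))
        dist = math.sqrt(sum(c * c for c in to_radar))
        cos_ang = sum(d * t for d, t in zip(normal, to_radar)) / dist
        # Specular condition: normal nearly aligned with the return path.
        if cos_ang >= math.cos(math.radians(tol_deg)):
            kept.append(tuple(c + rng.gauss(0.0, noise) for c in p))
    return kept

cloud = visible_specular_points(radar=(0.0, 0.0, 5.0),
                                center=(0.0, 0.0, 0.0), r=0.5)
print(f"{len(cloud)} of 2000 sampled surface points are radar-visible")
```

Running this keeps only the small cap of the sphere facing the radar, a toy version of the partial reflections Wave‑Former is described as completing; a synthetic-data pipeline would repeat such rendering across many object shapes, poses, and occluders.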





























