MIT researchers have unveiled an AI system that rapidly segments regions of interest in medical images, aiming to cut the time and cost of clinical research. The tool, called MultiverSeg, lets users guide segmentation with clicks and scribbles, then learns from each previously segmented image to reduce the input needed for the next, eventually requiring none at all for new scans. In tests, it surpassed state-of-the-art methods, reaching 90% accuracy with roughly two-thirds the scribbles and three-quarters the clicks that earlier systems required. Unlike conventional approaches, it needs no task-specific retraining and no large pre-labeled dataset, lowering both the technical and the computational barriers to use. The research, led by MIT graduate student Hallee Wong with collaborators at MIT and Harvard Medical School/MGH, will be presented at the International Conference on Computer Vision. Backers include Quanta Computer, the National Institutes of Health, and the Massachusetts Life Sciences Center. Potential uses range from accelerating studies of disease progression to streamlining clinical workflows such as radiation treatment planning.
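The summary describes MultiverSeg's workflow only at a high level, so the following is a minimal Python sketch of the general idea, not the paper's actual method or API: `SegmenterStub`, `make_scan`, and the simulated user are all invented stand-ins. It illustrates the loop described above: user scribbles correct each prediction, every finished segmentation joins a context set, and a model that benefits from more context needs fewer interactions on later scans (here the stub simply thresholds better as the context grows).

```python
import numpy as np

rng = np.random.default_rng(0)

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlap score in [0, 1]; 1.0 means a perfect match."""
    total = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / total if total else 1.0

def make_scan(shape=(64, 64)):
    """Toy 'scan': a noisy image whose true region of interest is a square."""
    truth = np.zeros(shape, dtype=bool)
    r0, c0 = rng.integers(8, 24, size=2)
    truth[r0:r0 + 24, c0:c0 + 24] = True
    return truth + 0.25 * rng.standard_normal(shape), truth

class SegmenterStub:
    """Stand-in for the real model: its threshold improves as the context
    set grows, and it honors the user's scribbles (here, corrected rows)."""

    def predict(self, image, scribbles, context):
        threshold = max(0.5, 1.1 - 0.15 * len(context))  # more context, better guess
        pred = image > threshold
        for row, labels in scribbles:                    # apply user corrections
            pred[row] = labels
        return pred

def interactive_session(model, n_scans=5, target=0.90):
    context = []  # (image, mask) pairs from scans segmented so far
    for i in range(n_scans):
        image, truth = make_scan()
        scribbles = []
        pred = model.predict(image, scribbles, context)
        # Simulated user: scribble over one mis-segmented row at a time
        # until the prediction clears the target score.
        while dice(pred, truth) < target:
            wrong_rows = np.unique(np.argwhere(pred != truth)[:, 0])
            row = int(rng.choice(wrong_rows))
            scribbles.append((row, truth[row].copy()))
            pred = model.predict(image, scribbles, context)
        context.append((image, pred))  # finished result joins the context
        print(f"scan {i}: {len(scribbles)} scribbles to reach Dice >= {target}")

interactive_session(SegmenterStub())
```

Run as-is, the sketch prints a scribble count that falls toward zero across the five toy scans, mirroring the reported behavior that interaction requirements shrink as segmented examples accumulate.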