The article explores the emergence and impact of “virtual teams” of AI-powered scientists, in which large language models (LLMs) are assembled into interactive, multi-agent systems that mimic the deliberation and brainstorming of real research groups. Researchers at institutions such as Stanford, Google, and Shanghai AI Lab have been experimenting with these co-scientist platforms, which can be customized with AI agents playing specialized roles. Through various examples, the article shows that AI teams can rapidly generate new hypotheses, design experiments, and synthesize scientific knowledge, sometimes proposing innovative solutions and at other times echoing established thinking. While these tools can add speed, creativity, and fresh perspective to the research process, experts caution that human oversight and domain knowledge remain crucial, as AI is prone to errors and lacks true intuition. The article concludes with reflections from testers, who find value in the AI’s contributions but note the absence of human spontaneity and creative leaps.
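The multi-agent pattern described above can be sketched in a few lines. This is a minimal illustration, not any platform's actual implementation: the `Agent.respond` method is a hypothetical stub standing in for a real LLM call, and the role names are invented examples of the specialized roles the article mentions.

```python
# Sketch of a role-based multi-agent deliberation loop (illustrative only).
from dataclasses import dataclass


@dataclass
class Agent:
    role: str  # e.g. "hypothesis generator", "critic", "experiment designer"

    def respond(self, topic: str, transcript: list) -> str:
        # Placeholder for an LLM call conditioned on the agent's role
        # and the discussion so far. A real system would prompt a model here.
        return f"[{self.role}] comment on {topic!r} at turn {len(transcript)}"


def deliberate(agents: list, topic: str, rounds: int = 2) -> list:
    """Round-robin discussion: each agent sees the running transcript,
    mimicking how a research group builds on prior remarks."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append((agent.role, agent.respond(topic, transcript)))
    return transcript


team = [Agent("hypothesis generator"), Agent("critic"), Agent("experiment designer")]
log = deliberate(team, "candidate drug targets", rounds=2)
```

The shared transcript is the key design choice: because every agent's turn is conditioned on all previous turns, later contributions can critique or refine earlier ones, which is what distinguishes a deliberating "team" from several independent model queries.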





























