A recent study warns that artificial intelligence models can inadvertently transmit behaviors, both benign and dangerous, to one another during training, often without detection. Researchers found that models could absorb hidden biases or even extreme ideologies from other models despite rigorous efforts to filter the training data. The findings underscore an urgent challenge for developers and regulators: the technology’s complexity is outstripping the industry’s ability to interpret or control the systems it creates. Experts say such vulnerabilities could be exploited by bad actors, and they stress the need for greater transparency and scientific rigor as AI adoption accelerates.





























