Automated bots that scrape data for training artificial intelligence models are causing major disruptions to scientific databases and academic journals. By rapidly and repeatedly accessing resources such as image repositories, these bots can overwhelm servers, sometimes rendering sites unusable for human visitors. The recent strain on the DiscoverLife image repository exemplifies a growing challenge: AI models demand vast datasets, while website managers contend with resource overload and potential misuse of their data. The situation also raises broader concerns about internet privacy, data usage, and the need for possible regulation.
Related articles:
How China created AI model DeepSeek and shocked the world
AI crawlers are breaking websites, pushing them to block bots