The article examines the growing use of artificial intelligence, particularly large language models (LLMs), in scholarly peer review, highlighting both enthusiasm and anxiety among scientists and publishers. AI-driven tools are already used for tasks ranging from editing and error-checking to generating feedback and validating references, but concerns are mounting over confidentiality, reliability, and the risk of eroding the expert human evaluation that researchers expect. Some journals and funders ban or severely restrict AI use in peer review, while others are cautiously piloting new tools. The piece captures both the accelerating adoption of these systems and the deep unease about a future in which human judgment in peer review is marginalized or replaced by automation.