Heavy reliance on generative artificial intelligence doesn’t just polish prose; it can reshape what people say. A peer‑reviewed study by researchers from West Coast universities, including a lead author now at Google DeepMind, found that participants who used large language models to generate more than 40% of their text produced markedly different essays from those who used little or no AI. On a prompt asking whether money buys happiness, heavy AI users gave neutral answers 69% more often and wrote in a less personal, more formal voice, using 50% fewer pronouns.
The team tested three leading systems (Anthropic’s Claude 3.5 Haiku, OpenAI’s GPT‑5 Mini and Google’s Gemini 2.5 Flash) and reported that AI revisions replaced far more of the original wording than human editors did, altering meaning and erasing individual “lexical fingerprints.” Although heavy AI users rated their outputs as similarly satisfying, they said the work felt less creative and less like their own. The findings, accepted for an AI‑conference workshop, raise concerns that AI optimization and human‑feedback training may push writing toward safest‑common‑denominator language, with implications for scholarship, peer review and public discourse.
Related articles:
— Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence
— ChatGPT listed as author on research papers: many scientists disapprove