Universities from Beijing to Columbus are racing to make generative AI a core part of campus life, even as faculty debate whether the technology enhances learning or erodes critical thinking. Tsinghua University greets freshmen with an AI concierge, Ohio State is requiring “AI fluency” courses, and the University of Sydney is tightening in-person assessments. Surveys indicate widespread student adoption—86% used AI regularly in 2024—while faculty uptake lags and institutional guidance remains patchy. Australia has pushed national guidelines; China has folded AI into state strategy; and UNESCO warns policy is falling behind practice.
Tech firms are moving in: California State University rolled out ChatGPT Edu systemwide, and Google is offering advanced AI tools to students for free. Some campuses are building their own systems to reduce vendor lock-in—Sydney’s Cogniti and Tsinghua’s multi-model architecture aim to curb hallucinations and tailor course agents. Early evidence is mixed: a Harvard randomized trial found AI tutors can accelerate learning, but new research in China suggests gains may fade, and an MIT preprint links LLM-assisted writing to reduced brain connectivity. The result is a scramble to set guardrails, redesign assessments, and gather hard data on what AI does—and doesn’t do—for student learning.
Related article:
Do LLMs Short-Circuit Student Thinking? Evidence from EEG (arXiv)