The article investigates California colleges’ growing reliance on costly plagiarism- and AI-detection software, especially Turnitin, despite concerns over its accuracy, cost, and student privacy. Since the rise of generative AI tools like ChatGPT, colleges have spent millions policing academic integrity with detectors that often yield false positives and feed student work into corporate databases. These tools create anxiety and confusion for honest students, disproportionately flag non-native English speakers, and blur the line between legitimate and illegitimate uses of technology in writing. Critics argue that faculty and institutions should focus on building trust, clarifying AI policies, and providing training rather than resorting to faulty surveillance tools. The article highlights that despite mounting evidence of limited effectiveness and growing backlash, most colleges continue to renew their contracts, prioritizing deterrence over nuance and potentially undermining both student trust and students’ intellectual property rights.