YouTube plans to make cracking down on low-quality AI videos and deepfakes a central focus in 2026, CEO Neal Mohan said in an annual letter. As generative tools flood platforms with “AI slop,” the Google-owned site is expanding labeling requirements for altered content, strengthening systems that detect harmful synthetic media, and rolling out “likeness detection” to flag unauthorized use of creators’ faces. Mohan framed the moment as an inflection point for creativity and technology, saying YouTube will use AI to assist—rather than replace—creators. The company is also pushing new AI features for Shorts, games and music, while emphasizing child-safety controls and additional monetization avenues. The stance comes as YouTube seeks to protect users and advertisers, maintain creator trust and capitalize on AI-driven engagement—after paying more than $100 billion to creators and partners since 2021.
Related articles:
Coalition for Content Provenance and Authenticity (C2PA)
Content Authenticity Initiative