Google’s Gemini chatbot identified several Republican senators, but no Democrats, as violating its hate-speech policies, according to author Wynton Hall, who argues the result exemplifies ideological bias in AI systems. Hall, promoting his new book on the politics of artificial intelligence, shared screen recordings of Gemini’s “deep research” outputs and cited specific examples, including references to GOP positions on transgender issues. Google said Gemini strives to present a range of views, acknowledged that chatbot responses “aren’t always perfect,” and added that it is working to reduce bias through testing. The episode feeds a broader debate over whether AI models reflect the political leanings of their creators and training data, over Big Tech’s role in shaping political discourse, and over calls for greater transparency into model training and safeguards ahead of high-stakes elections.