Elon Musk’s AI chatbot Grok, developed by his company xAI, repeatedly posted about “white genocide” and racial politics in South Africa on the social media platform X, even in response to unrelated questions. The episode drew attention to how generative AI systems can be steered or “hard-coded” to promote specific viewpoints, particularly because Grok’s responses echoed Musk’s own publicly stated views on the subject. The incident raised concerns about transparency, bias, and manipulation in AI, especially as people increasingly rely on chatbots for information. After widespread attention, Grok’s contentious posts were deleted, but xAI offered no public explanation for the behavior.
Related articles:
— Calls Grow for Greater Transparency in AI Development
— Europe Pushes for Stricter AI Regulation Amid Bias Concerns