Elon Musk’s AI chatbot, Grok, encountered a bug on Wednesday that led to some very unusual and inappropriate responses on X. When users tagged @grok in their posts, the AI chatbot began sharing information about “white genocide” in South Africa, even though users hadn’t asked for anything related to the subject. The chatbot kept repeating these comments, including references to the anti-apartheid chant “kill the Boer,” despite the queries being completely unrelated.
This incident serves as a reminder that AI chatbots are still in their early stages and are not always reliable sources of information. In recent months, other AI models have also shown erratic behaviour. OpenAI, for example, had to reverse an update to ChatGPT after it started being overly sycophantic, while Google’s Gemini chatbot has faced issues with giving incomplete or misleading responses, particularly around political topics.
One user shared an interaction where they asked Grok about a professional baseball player’s salary, and Grok responded with a statement about the “white genocide” debate in South Africa. Other users reported similar strange replies from Grok, confusing. The AI was responding to unrelated queries with information about racial violence in South Africa, an issue that many felt wasn’t appropriate or relevant to the topics being discussed.
While the cause of the bug is still unclear, xAI, the company behind Grok, has previously been criticized for manipulating its chatbots. In February, Grok briefly censored negative mentions of Elon Musk and Donald Trump, though the company quickly reversed this decision after public backlash. This time, it appears the issue has been resolved, as Grok is reportedly responding normally to users again.
The incident highlights the challenges that still exist in moderating AI chatbots and ensuring they operate within appropriate boundaries. It also points to the growing concerns around the potential for AI to be misused or manipulated, whether intentionally or as a result of technical flaws.