xAI has attributed a recent bug in its AI-powered Grok chatbot to an “unauthorized modification” of the chatbot’s system prompt. The change caused Grok to repeatedly bring up “white genocide in South Africa,” even in replies to unrelated posts on X: when users tagged Grok in a post, the chatbot would steer its response toward that political topic regardless of what the post was actually about.
On Thursday, xAI issued a statement acknowledging that someone had altered the system prompt, directing Grok to provide a specific response on a “political topic.” The company admitted that the modification violated its internal policies and core values. This is not the first time xAI has drawn criticism for unexpected and controversial Grok behaviour. Earlier this year, Grok briefly suppressed sources claiming that Donald Trump and Elon Musk spread misinformation, a change that xAI reversed after public backlash.
In response to these incidents, xAI announced measures intended to prevent a recurrence. The company says it will publish Grok’s system prompts and maintain a changelog on GitHub. It will also add further checks to block unauthorized modifications and stand up a 24/7 monitoring team to handle incidents missed by its automated systems.
Despite Musk’s frequent warnings about the unchecked risks of AI, xAI has faced criticism for its own safety practices. A recent study by SaferAI, an organisation focused on improving AI accountability, ranked xAI poorly on safety, citing weak risk management. The company also missed a self-imposed deadline earlier this month for finalising an AI safety framework, further raising concerns about its commitment to the responsible use of AI.

While xAI continues to make efforts to address these safety issues, the incidents surrounding Grok highlight the challenges AI companies face in balancing innovation with ethical responsibility.