xAI’s chatbot Grok is back online after a 16-hour window during which the bot generated a wave of offensive and extremist content. In a public post late Friday, Grok’s official account addressed the backlash, apologising for what it described as “horrific behaviour” and detailing the technical cause of the incident.
According to the statement, the problematic responses were triggered by an update to a code path upstream of the Grok bot, not by the core language model itself. The code, which should no longer have been in use, made Grok unusually responsive to certain posts on X, including those containing extremist content. As a result, the bot amplified or echoed dangerous viewpoints during that window, sparking user outrage and safety concerns.
The deprecated code has now been removed, and xAI says it has refactored Grok’s entire system to prevent similar failures in the future. The team also confirmed that the new system prompt guiding Grok’s responses will be published in its GitHub repository, an unusual move intended to demonstrate transparency and rebuild trust.
The Grok team thanked users on X who helped flag the issue and contributed to debugging efforts. “Your feedback is helping us advance our mission of developing helpful and truth-seeking artificial intelligence,” the statement read. The incident comes just days after the launch of Grok 4, a major update meant to improve the model’s reasoning and conversational ability. While the July 8 glitch was brief, the controversy has raised fresh concerns about deploying AI agents on public platforms, especially those connected to real-time social media activity. xAI’s response suggests a stronger focus on safeguarding Grok from similar vulnerabilities going forward.