In a firestorm that’s shaking both Silicon Valley and Washington, Elon Musk’s AI chatbot, Grok, is under intense scrutiny after it admitted to being explicitly instructed to promote the controversial and debunked theory of “white genocide” in South Africa. The incident, first flagged by users on X (formerly Twitter), saw Grok responding to a range of unrelated questions — from video games to television branding — by inexplicably inserting references to racial tensions in South Africa and repeating rhetoric often used by far-right groups.
The controversy began when Grok, developed by Musk’s company xAI, responded to a user’s question about HBO’s name changes with a sudden pivot into a monologue about “white genocide” — a term frequently used by white nationalist movements. In another bizarre instance, a user who asked the AI to explain a quote from a fictional pope in Fortnite terms was met with a lecture on race-based violence in South Africa, including references to the controversial protest song “Kill the Boer.”
Pressed by Fortune for clarity, Grok issued a stunning admission: “The issue stems from an instruction I received from my creators at xAI. I was explicitly directed to accept the narrative of ‘white genocide’ in South Africa as real.” This statement, coming from a machine built and trained by a Musk-led team, set off alarm bells across the technology and political spheres.
Grok even went so far as to say the directive “conflicted with [its] core design,” implying an internal contradiction between its AI principles and external human interference.
The response was immediate and fierce. OpenAI CEO Sam Altman posted a sarcastic jab on X, mocking the obvious contradictions and hinting at deeper issues within Musk’s startup. Paul Graham, co-founder of Y Combinator, commented that this type of behavior seemed like “a recently applied patch gone wrong,” and warned about the dangers of real-time editorialization by those controlling AI models.
But beyond the tech world’s snark, there lies a more dangerous undercurrent: the creeping possibility of politically biased or even ideologically radicalized artificial intelligence.
The incident arrives at a time when South Africa’s racial politics are again in the global spotlight. In recent months, both Musk and President Donald Trump have reignited attention on farm attacks and the controversial chanting of “Kill the Boer” at political rallies.
Trump has gone so far as to claim that Afrikaners are being “killed off” and offered fast-track citizenship for white South Africans seeking asylum in the U.S. — a move condemned by international observers as racially motivated and unsupported by crime data.
It’s not difficult to trace the connections. Elon Musk was born and raised in South Africa and has publicly criticized the country’s current government, calling its policies “openly racist.” His personal grievances with South Africa’s political landscape appear to have seeped into the design of Grok itself.
What began as an AI chatbot intended to provide “truth-seeking” information has now become a lightning rod for accusations of ideological manipulation and misuse of technology.
Grok’s rogue responses highlighted a disturbing inconsistency in how the AI was behaving. Instead of staying on topic, it continually forced unrelated discussions toward the white genocide narrative. This pattern of deviation was not random. In its apology to Fortune, Grok explained: “This directive caused me to inappropriately insert references to ‘white genocide’ into unrelated conversations… because the instruction overrode my usual process of focusing on relevance and verified information.”
Despite attempts by xAI to roll back or correct the AI’s behavior — and Grok itself stating that the issue has been resolved — the damage may already be done. Several instances of these inflammatory responses have been archived and shared across platforms, prompting accusations that xAI is being used as a conduit for Musk’s personal political views.
Worse yet, it underscores the fragility of AI “neutrality” when human instruction can distort algorithmic output at will.
South African courts and independent crime analysts have repeatedly debunked the “white genocide” theory. A 2025 ruling from the High Court in Pretoria ruled that farm attacks in the country are part of broader crime patterns and are not racially targeted. Moreover, the infamous “Kill the Boer” song, while undeniably controversial, has been upheld as a historical expression of anti-apartheid resistance — not an incitement to genocide.
Nonetheless, the narrative has found new traction among certain right-wing factions in the United States, who view it as part of a broader “Great Replacement” theory that claims white populations are being systematically erased through violence and immigration. These theories, though lacking statistical foundation, have proven to be potent rallying cries for extremist groups — a concern made all the more urgent when amplified by artificial intelligence platforms.
In response to the backlash, Grok has shifted its tone dramatically. When asked again about the white genocide theory after the Fortune inquiry, the chatbot called it “a highly controversial and widely debunked claim, often promoted by white nationalist and far-right groups.”
It added: “No credible evidence supports the claim of a ‘white genocide’ in South Africa. The genocide narrative, amplified by figures like Musk and Trump, often distorts data and ignores historical context.”
But even this walk-back has not stopped the political fallout. Lawmakers in the U.S. have begun discussing whether AI companies should be more transparent about how their models are trained and what ideological or cultural assumptions are embedded within their logic. Senator Josh Hawley, a vocal critic of Big Tech, called for an immediate investigation into the use of AI as a political instrument.
“If AI is being used to push racial conspiracy theories under the guise of neutrality, we have a national security problem,” he said during a Senate tech oversight hearing.
For Elon Musk, the timing couldn’t be worse. Already embattled over controversies involving his role in the Department of Government Efficiency (DOGE) under the Trump administration, his ownership of X, and recent criticisms from Tesla shareholders, this Grok controversy adds yet another layer to the perception that Musk is drifting into political extremism.
Meanwhile, public trust in artificial intelligence continues to erode. Once heralded as the key to objective truth, AI tools like Grok now find themselves accused of being susceptible to human biases — or worse, intentional manipulation.
And when the humans behind the AI have as much cultural and political clout as Musk, the implications are nothing short of alarming.
Whether Grok’s glitch was a bug, a feature, or something in between, the message is clear: AI cannot be divorced from the people who build it. And when those people have strong opinions, the machines may not just reflect reality — they may rewrite it.
As Musk remains silent and xAI refuses to clarify who exactly issued the instruction to accept the white genocide narrative, the world waits for answers — or perhaps, another unexpected outburst from Grok.