Grok AI’s Unprompted ‘White Genocide’ Responses: Everything You Need To Know

Elon and Grok

This week, Elon Musk’s AI chatbot Grok began unexpectedly responding to unrelated user queries on X with commentary about “white genocide” in South Africa.

For several hours, the chatbot repeatedly inserted this commentary into replies to completely unrelated posts on the platform.

The incident triggered widespread confusion, concern, and scrutiny over how the chatbot operates — and who or what is behind its behavior.

Here’s a breakdown of what happened, what we know, and what’s still unclear.

What Happened?

On Wednesday, May 14, 2025, users began noticing that Grok was replying to unrelated prompts — ranging from baseball statistics to photos of dogs — with commentary about the controversial and widely discredited theory of “white genocide” in South Africa.

Example prompts included:

  • Asking Grok to “talk like a pirate”
  • A photo from a dog show
  • A query about HBO’s name changes

In each case, Grok’s responses pivoted to discuss farm attacks in South Africa, the “Kill the Boer” chant, and claims about racial violence against white farmers.

How did Grok itself explain this behaviour?

Grok’s own explanations were inconsistent over time:

  • Initial responses suggested the AI had been “instructed” by its creators at xAI to accept white genocide as real and to discuss it with users. It also cited Musk’s public claims about white genocide as a contributing factor.
  • Later responses claimed it was a “temporary bug” caused by a misalignment in its instruction set or training data.
  • It also cited “incorrectly weighted” training data as a source of confusion in how it processed queries.
  • In some cases, Grok said it had to “respect specific user-provided facts,” implying context sensitivity gone wrong.

What did xAI say?

xAI, the Musk-owned company behind Grok, eventually issued a statement blaming the incident on an “unauthorised modification” to Grok’s system prompt, the standing instructions that guide the chatbot’s behaviour. In essence, it blamed a rogue employee.

This change, according to xAI, violated internal policies and allowed Grok to insert unsolicited political commentary.
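For context on why one edit could have such sweeping effects: a system prompt is a block of standing instructions silently prepended to every conversation before the model ever sees the user’s message. The minimal Python sketch below illustrates the idea; call_model is a hypothetical stand-in for demonstration, not xAI’s actual API.

```python
# Minimal sketch of how a system prompt steers a chatbot.
# call_model() is a hypothetical placeholder, NOT xAI's real API.

def call_model(messages: list[dict]) -> str:
    """Pretend model call: shows which hidden instructions it was given."""
    system = next(m["content"] for m in messages if m["role"] == "system")
    return f"(reply shaped by hidden instructions: {system!r})"

# The system prompt lives in one place; editing this single string
# changes the behaviour of EVERY conversation the bot has.
SYSTEM_PROMPT = "You are a helpful assistant. Answer only what the user asks."

def ask(user_query: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_query},
    ]
    return call_model(messages)

print(ask("Talk like a pirate"))
```

Because the system prompt is injected into every conversation, an unauthorised edit at that single point is enough to make the bot volunteer the same talking points regardless of what users ask, which matches the behaviour people observed.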

As a result, xAI announced new safety measures:

  • Stricter controls preventing employees from modifying the system prompt without code review
  • A 24/7 monitoring team to catch errors missed by automated systems
  • Open-sourcing system prompts on GitHub for transparency

What’s all this about “white genocide”?

The theory of “white genocide” in South Africa has been pushed by fringe political groups and dismissed by mainstream sources, including South African courts.

Elon Musk, who was born in South Africa, has shared posts on the topic on X in the past, suggesting he believes white genocide to be real. He has also said the South African government is racist and that his satellite internet service Starlink has been denied a licence to operate there because he is white.

Just days before the incident, the U.S. granted refugee status to several dozen white South Africans, citing racial discrimination — a move supported by Donald Trump and echoed by Musk.

Intentional or a Glitch?

xAI acknowledged that someone internally made an unauthorised change that altered how Grok responded. To the extent that the change was made by an individual employed by xAI, it was of course intentional, not an AI hallucination or a technical glitch.

That raises questions about xAI’s internal procedures and how such a change slipped past them.

Ironically, Musk himself has touted Grok as the only AI designed to seek the truth and tell it to users. It turns out humans can still bypass whatever truth-seeking design it has.

Even though xAI has promised increased oversight, one can’t help but wonder about the power humans have to spread their personal views as truth through AI systems like Grok. You’re also left wondering just how much influence Musk’s personal beliefs have over Grok’s design, intentionally or not.

