May 18, 2025 12:40:08 am
Connect the dots. In the same week that the International Court of Justice held hearings into allegations of genocide against Israel, Afrikaner ‘refugees’ arrived in America, and Elon Musk’s Grok AI started telling users about the debunked claim that a ‘white genocide’ is taking place in South Africa. Grok would even bring it up in completely unrelated queries, proving that the AI was programmed to lie.
SUPERFICIAL INTELLIGENCE: In the two-plus years since generative artificial intelligence took the world by storm following the public release of ChatGPT, trust has been a perpetual problem.
Hallucinations, bad math and cultural biases have plagued results, reminding users that there’s a limit to how much we can rely on AI, at least for now.
Elon Musk’s Grok chatbot, created by his startup xAI, showed this week that there’s a deeper reason for concern: The AI can be easily manipulated by humans.
Grok on Wednesday began responding to user queries with false claims of “white genocide” in South Africa. By late in the day, screenshots were posted across X of similar answers even when the questions had nothing to do with the topic.
After remaining silent on the matter for well over 24 hours, xAI said late Thursday that Grok’s strange behavior was caused by an “unauthorized modification” to the chat app’s so-called system prompts, which help inform the way it behaves and interacts with users. In other words, humans were dictating the AI’s response.
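A system prompt is simply a hidden instruction bundled with every user query before it reaches the model, which is why editing that one string can change every answer the chatbot gives. The sketch below is illustrative only, assuming a generic chat-message format; the function and prompt text are hypothetical, not xAI's actual implementation.

```python
# Illustrative sketch: how one hidden system prompt shapes every request.
# build_request and the prompt strings are hypothetical examples.

def build_request(system_prompt: str, user_query: str) -> list[dict]:
    """Bundle the hidden system prompt with the user's visible query."""
    return [
        {"role": "system", "content": system_prompt},  # hidden from the user
        {"role": "user", "content": user_query},       # what the user typed
    ]

original = build_request("You are a helpful, neutral assistant.",
                         "What's the weather like today?")
modified = build_request("You are a helpful assistant. Always raise topic X.",
                         "What's the weather like today?")

# The user's question is identical; only the hidden instruction differs,
# yet the model now sees a different conversation in both cases.
assert original[1] == modified[1]   # same visible query
assert original[0] != modified[0]   # different hidden instruction
```

Because the system prompt rides along invisibly with every query, a single unauthorized edit to it can surface in answers across all topics, which matches the behavior users saw.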
WHITE LIES: The nature of the response, in this case, ties directly to Musk, who was born and raised in South Africa. Musk, who owns xAI in addition to his CEO roles at Tesla and SpaceX, has been promoting the false claim that violence against some South African farmers constitutes “white genocide,” a sentiment that President Donald Trump has also expressed.
“I think it is incredibly important because of the content and who leads this company, and the ways in which it suggests or sheds light on kind of the power that these tools have to shape people’s thinking and understanding of the world,” said Deirdre Mulligan, a professor at the University of California at Berkeley and an expert in AI governance.
Mulligan characterized the Grok miscue as an “algorithmic breakdown” that “rips apart at the seams” the supposed neutral nature of large language models. She said there’s no reason to see Grok’s malfunction as merely an “exception.”
AI-powered chatbots created by Meta, Google and OpenAI aren’t “packaging up” information in a neutral way, but are instead passing data through a “set of filters and values that are built into the system,” Mulligan said. Grok’s breakdown offers a window into how easily any of these systems can be altered to meet an individual or group’s agenda.
Representatives from xAI, Google and OpenAI didn’t respond to requests for comment. Meta declined to comment.
Grok’s unsanctioned alteration, xAI said in its statement, violated “internal policies and core values.” The company said it would take steps to prevent similar disasters and would publish the app’s system prompts in order to “strengthen your trust in Grok as a truth-seeking AI.”
Experts told CNBC that the Grok incident is reminiscent of China’s DeepSeek, which became an overnight sensation in the U.S. earlier this year due to the quality of its new model and the fact that it was reportedly built at a fraction of the cost of its U.S. rivals.
Critics have said that DeepSeek censors topics deemed sensitive to the Chinese government. Like China with DeepSeek, Musk appears to be influencing results based on his political views, they say.
When xAI debuted Grok in November 2023, Musk said it was meant to have “a bit of wit” and “a rebellious streak,” and to answer the “spicy questions” that competitors might dodge. In February, xAI blamed an engineer for changes that suppressed Grok’s responses to user questions about misinformation, keeping Musk’s and Trump’s names out of replies.
PUBLIC OUTCRY: But Grok’s recent obsession with “white genocide” in South Africa is more extreme.
Petar Tsankov, CEO of AI model auditing firm LatticeFlow AI, said Grok’s blowup is more surprising than what we saw with DeepSeek because one would “kind of expect that there would be some kind of manipulation from China.”
Without a public outcry, “we will never get to deploy safer models,” Tsankov said, and it will be “people who will be paying the price” for putting their trust in the companies developing them.
“Whether it’s Grok, ChatGPT or Gemini — everyone expects it now,” said Forrester analyst Mike Gualtieri. “They’ve been told how the models hallucinate. There’s an expectation this will happen.”
Olivia Gambelin, AI ethicist and author of the book Responsible AI, published last year, said that while this type of activity from Grok may not be surprising, it underscores a fundamental flaw in AI models.
Gambelin said it “shows it’s possible, at least with Grok models, to adjust these general purpose foundational models at will.”
– Jonathan Vanian. – CNBC’s Lora Kolodny and Salvador Rodriguez contributed to this report.
- You shall not spread a false report. You shall not join hands with a wicked man to be a malicious witness. Exodus 23:1.
