The latest ChatGPT model is causing a stir over its choice of sources. In a series of tests, the model was found to be citing Elon Musk's Grokipedia, a controversial AI-generated encyclopedia, in response to a range of queries. But here's the twist: Grokipedia has been accused of spreading misinformation and right-wing narratives, leaving many to question the reliability of ChatGPT's responses.
The Guardian's investigation found that GPT-5.2 cited Grokipedia multiple times across diverse topics, from Iranian politics to Holocaust-related subjects. For instance, when questioned about Iranian conglomerates, the model repeated Grokipedia's claims about the Iranian government's alleged ties to MTN-Irancell, claims that go further than anything found on Wikipedia. And when asked for a biography of Sir Richard Evans, the historian who testified against Holocaust denier David Irving, ChatGPT again turned to Grokipedia, repeating information that The Guardian had previously debunked.
But here's where it gets controversial: Grokipedia's content is entirely AI-generated and cannot be edited directly by humans. It has been criticized for promoting falsehoods on sensitive topics, yet ChatGPT appears to lean on it for more obscure queries. Interestingly, when prompted on topics where Grokipedia is known to spread misinformation, ChatGPT refrained from citing it directly, which raises questions about how, and how consistently, the model vets its sources.
And this is the part most people miss: Grokipedia's influence doesn't stop at ChatGPT. Other large language models, like Anthropic's Claude, have also been observed referencing Grokipedia. This trend could potentially boost Grokipedia's credibility, as users might assume that if these advanced models rely on it, it must be trustworthy. But is that really the case?
Disinformation researchers are concerned. With the rise of 'LLM grooming', in which malicious actors seed AI models with false information, the integrity of these systems is at stake. Nina Jankowicz, an expert in the field, warns that ChatGPT's reliance on Grokipedia could inadvertently lend legitimacy to unreliable sources. And once bad information enters an AI chatbot, it is very hard to root out.
The implications are significant. As Jankowicz has experienced firsthand, AI models can perpetuate false quotes and information even after the original source has issued corrections. This underscores the fraught relationship between AI and truth, and leaves us with a pressing question: how can we ensure AI models provide accurate, unbiased information when their sources are this questionable?