A new trend has emerged on Elon Musk's social media platform X (formerly Twitter): some users are employing Grok, the AI chatbot developed by Musk's AI company xAI, for "fact-checking." This practice has raised concerns among professional human fact-checkers, who fear it could exacerbate the spread of misinformation.


Earlier this month, X enabled users to query xAI's Grok directly. Users can ask Grok questions on a range of topics, much as Perplexity does through its automated account on X, which offers a similar service.

Shortly after xAI's Grok automated account launched on X, users began experimenting with it. In markets including India, some users started using Grok to "fact-check" comments and questions related to specific political viewpoints.

However, human fact-checkers are concerned about the use of Grok, or any AI assistant of this kind, for "fact-checking," because such bots can frame answers that sound convincing even when the underlying information is false. Grok already has a history of spreading fake news and misleading information.

Last August, five U.S. secretaries of state urged Musk to make key adjustments to Grok after it spread misleading information on social media in the run-up to the U.S. elections. Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were also found to generate inaccurate information about the elections last year. Furthermore, anti-misinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily generate persuasive, misleading narratives.

Angie Holan, director of the International Fact-Checking Network (IFCN), told TechCrunch: "AI assistants like Grok are very good at using natural language and giving answers that sound like a human being said them. That gives them a sense of naturalness and authenticity even if the AI product is potentially seriously wrong. That's where the danger lies."

X users have asked Grok to "fact-check" claims made by other users. Unlike AI assistants, human fact-checkers verify information against multiple credible sources and take full accountability for their findings, attaching their names and organizations to ensure credibility.

Pratik Sinha, co-founder of the Indian non-profit fact-checking website Alt News, points out that while Grok currently provides convincing answers, its capabilities depend entirely on the quality of the data it accesses. He emphasizes: "Who decides what data it gets? Issues like government intervention come into play. Lack of transparency is inherently harmful because anything that lacks transparency can be easily shaped...it can be misused—to spread misinformation."

In a response earlier this week, Grok's official account on X acknowledged that it "can be misused—to spread misinformation and violate privacy." However, the automated account displays no disclaimer alongside its answers, so users could be misled if an answer is hallucinated, a known weakness of AI. Anushkha Jain, a researcher at the Digital Futures Lab, a Goa-based interdisciplinary research institution, told TechCrunch: "It might fabricate information to provide a response."

Furthermore, questions remain about the extent to which Grok uses posts from X as training data and what quality control measures it uses to verify the veracity of those posts. Last summer, X made a change that seemingly allowed Grok to use public data from X users by default.

Another concern about AI assistants like Grok disseminating information publicly on social media platforms is that even when some users understand that an AI assistant's answers may be misleading or inaccurate, others on the platform may still believe them. This can have serious societal consequences: in India, misinformation spread through WhatsApp has previously led to mob violence, and while those incidents predate generative AI, the technology now makes synthetic content easier to produce and more realistic.

Holan of the IFCN stated: "If you see a lot of Grok's answers, you might think most of them are right, and maybe they are, but some are definitely wrong. What's the percentage of error? It's not a small percentage. Some research suggests that AI models have error rates as high as 20%...and when they get it wrong, it can have serious real-world consequences."

AI companies, including xAI, are refining their models to communicate more like humans, but the models are not a replacement for humans, and will not be anytime soon. In recent months, tech companies have been exploring ways to reduce reliance on human fact-checkers; platforms including X and Meta have begun experimenting with crowdsourced fact-checking through so-called "community notes." These changes, too, have raised concerns among fact-checkers.

Sinha of Alt News is optimistic that people will eventually learn to distinguish between machine and human fact-checkers and will place more value on human accuracy. Holan also believes: "We'll eventually see the pendulum swing back to a greater emphasis on fact-checking." However, she notes that in the meantime, fact-checkers may have more work to do as AI-generated information spreads rapidly. She also states: "A big part of this problem depends on whether you really care about what's true, or whether you're just looking for something that sounds and feels true but isn't. Because that's what AI assistants can deliver."

X and xAI did not respond to requests for comment.

Key Takeaways:

  • 🤖 Musk's AI chatbot Grok is being used for fact-checking on X, raising concerns among human fact-checkers about the spread of misinformation.
  • ⚠️ AI assistants like Grok may generate believable but inaccurate answers, lacking transparent quality control and data sources.
  • 👨‍👩‍👧‍👦 Human fact-checkers rely on multiple credible sources and take responsibility, and AI cannot replace the human role in verifying information.