A new study by the Tow Center for Digital Journalism, published in the Columbia Journalism Review, found that popular AI search tools give inaccurate or misleading answers to news queries more than 60% of the time. This is concerning because these tools not only erode public trust in news reporting but also drain publishers' traffic and revenue.


Image Source: AI-generated image, licensed through Midjourney

Researchers tested eight generative AI chatbots, including ChatGPT, Perplexity, Gemini, and Grok, asking each to identify the source of excerpts drawn from 200 recent news articles. More than 60% of the answers were wrong: the chatbots frequently fabricated headlines, failed to cite any article, or cited unauthorized republished copies. Even when they named the correct publisher, their links often pointed to dead URLs, syndicated versions, or irrelevant pages.
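To make the test design concrete, here is a minimal Python sketch of how such an excerpt-attribution benchmark could be scored. The function name, the answer fields, and the three-way grading scale are illustrative assumptions, not the study's actual code or rubric.

```python
# Hypothetical sketch of scoring one chatbot answer against the known
# source article. The grading rules below are assumptions for illustration.

def grade_answer(answer: dict, article: dict) -> str:
    """Compare a chatbot's attribution against the true source article."""
    right_publisher = answer.get("publisher") == article["publisher"]
    right_headline = answer.get("headline") == article["headline"]
    right_url = answer.get("url") == article["url"]
    if right_publisher and right_headline and right_url:
        return "correct"
    if right_publisher or right_headline:
        return "partially correct"  # e.g. right outlet, wrong or dead link
    return "incorrect"

# Example: the right outlet and headline, but an invented URL.
article = {"publisher": "Example Times",
           "headline": "City approves new transit plan",
           "url": "https://exampletimes.com/transit-plan"}
answer = {"publisher": "Example Times",
          "headline": "City approves new transit plan",
          "url": "https://exampletimes.com/made-up-slug"}

print(grade_answer(answer, article))  # -> "partially correct"
```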

Disappointingly, the chatbots rarely expressed uncertainty, instead offering wrong answers with undue confidence. ChatGPT, for example, answered 134 of the 200 queries incorrectly but flagged doubt only 15 times. Even the paid tiers performed poorly: Perplexity Pro ($20 per month) and Grok 3 ($40 per month) delivered confidently incorrect answers more often than their free counterparts.

On content attribution, multiple chatbots failed to honor publisher restrictions, and five of them ignored the widely adopted robots.txt protocol outright. Perplexity, for instance, correctly identified excerpts from a National Geographic article even though the publisher blocks its web crawler. ChatGPT, meanwhile, cited a paywalled USA Today article via an unlicensed copy on Yahoo News.
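For context, robots.txt is a plain-text file at a site's root that tells crawlers which paths they may fetch. A short sketch using Python's standard-library robotparser shows the check a compliant crawler is expected to perform; the bot name and URLs below are placeholders, not the actual publishers or crawlers from the study.

```python
# How a compliant crawler consults robots.txt before fetching a page.
# The user-agent string and article URL are illustrative placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example-publisher.com/robots.txt")
rp.read()  # fetch and parse the site's crawl rules

# A rule like "User-agent: ExampleBot" / "Disallow: /" makes this False,
# and a compliant crawler must then skip the page entirely.
allowed = rp.can_fetch("ExampleBot",
                       "https://www.example-publisher.com/article/123")
print("Fetch allowed:", allowed)
```

The study's point is that this check is voluntary: nothing technically stops a crawler from fetching the page anyway, which is why publisher restrictions can be ignored.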

Furthermore, many chatbots directed users to republished articles on platforms such as AOL or Yahoo instead of the original source, even when the publisher had a licensing agreement with the AI company. Perplexity Pro, for example, cited a republished copy of a Texas Tribune article without proper attribution. Grok 3 and Gemini frequently invented URLs; Grok 3 linked to error pages in 154 of its 200 answers.
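One straightforward way to surface the dead and fabricated links described above is to request each cited URL and record whether it resolves. A minimal standard-library sketch follows; the helper name and user-agent string are assumptions, not tooling from the study.

```python
# Check whether a cited URL resolves, distinguishing live pages from the
# 404-style error pages the study found. Helper name is illustrative.
import urllib.error
import urllib.request

def link_status(url: str, timeout: float = 10.0) -> str:
    """Return 'live', 'dead', or 'unreachable' for a cited URL."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "citation-check/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return "live"  # any 2xx/3xx response resolved to a real page
    except urllib.error.HTTPError as exc:
        return "dead" if exc.code in (404, 410) else f"error {exc.code}"
    except (urllib.error.URLError, TimeoutError):
        return "unreachable"

print(link_status("https://example.com/"))  # -> "live"
```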

This research highlights a growing crisis for news organizations. More Americans are using AI tools as information sources, but unlike Google, chatbots don't drive traffic to websites. Instead, they summarize content without linking back, depriving publishers of advertising revenue. Danielle Coffey of the News Media Alliance warned that without control over web crawlers, publishers won't be able to effectively "monetize valuable content or pay journalists’ salaries."

After contacting OpenAI and Microsoft, the researchers received defensive responses but no engagement with the specific findings. OpenAI stated that it "respects publishers' preferences" and helps users "discover quality content," while Microsoft said it adheres to robots.txt protocols. The researchers stressed that faulty attribution is a systemic problem, not one confined to any individual tool, and called on AI companies to improve transparency, accuracy, and respect for publishers' rights.

Key Takeaways:

📊 The study found that AI chatbots have an error rate exceeding 60%, severely impacting the credibility of news.

📰 Multiple chatbots ignored publisher restrictions, citing unauthorized content and incorrect links.

💰 News organizations face a dual crisis of traffic and revenue loss as AI tools gradually replace traditional search engines.