A new study from Columbia University's Tow Center for Digital Journalism raises alarms about the accuracy of AI search services. It finds that ChatGPT has serious accuracy problems when citing news sources, and that not even publishers that collaborate directly with OpenAI are spared.
The research team tested 200 news citations drawn from 20 different publishers, and the results are striking: ChatGPT returned wholly or partially incorrect source information in 153 cases. More concerning still, the system rarely acknowledged that information was missing, admitting it could not find a source only seven times.
The researchers note that, apparently to preserve a smooth user experience, ChatGPT prefers to fabricate an answer rather than admit it lacks the information. More alarming still, the system presents these false sources with unsettling confidence, showing no sign of uncertainty.
Even prominent publishers with direct partnerships with OpenAI, such as the New York Post and The Atlantic, were not spared. In some cases, ChatGPT even linked to websites that had copied entire articles without authorization rather than to the original sources.
Mat Honan, editor-in-chief of MIT Technology Review, commented: "As publishers, this is not what we want to see, and currently available remedies are extremely limited."
OpenAI's response to these concerning findings has been relatively cautious. The company emphasizes that ChatGPT serves 250 million users weekly and states that it is working with partners to improve citation accuracy.
The study's conclusion is clear: publishers cannot guarantee that ChatGPT Search will represent their content accurately, regardless of whether they have a partnership with OpenAI. This finding is likely to spur further debate about, and efforts to improve, the reliability of AI-generated information.