Recent reports have revealed that Perplexity, the highly anticipated and well-funded AI search tool, is citing low-quality and sometimes outright erroneous AI-generated junk from unreliable blogs and LinkedIn posts.
GPTZero, a startup specializing in detecting AI-generated content, recently conducted an in-depth investigation into Perplexity. The company's CEO, Edward Tian, pointed out in a blog post earlier this month that he noticed an "increasing number of AI-generated" sources linked by Perplexity.
He then examined how Perplexity reuses this AI-generated information and found that, in some cases, Perplexity even extracted outdated and incorrect claims from these sources. In other words, this is a loop of AI-driven misinformation: errors and fabrications produced by one AI flow straight into Perplexity's AI-generated answers.
For example, when asked about "cultural festivals in Kyoto, Japan," Perplexity provided a seemingly coherent list of cultural attractions in Japanese cities. However, it cited only a single source: an obscure blog post published on LinkedIn in November 2023, itself likely AI-generated. This is a far cry from Perplexity's claim of drawing on "news agencies, academic papers, and established blogs."
For a startup that claims to "revolutionize the way you discover information" by providing "precise knowledge" from "reliable sources" with "up-to-date information," this is a poor look.
Key Points:
🚩 Perplexity has been found citing erroneous AI-generated junk sourced from dubious blogs and LinkedIn posts.
🚩 GPTZero detected a growing number of AI-generated sources linked by Perplexity, which at times reproduces outdated and incorrect information from them.
🚩 Perplexity claims its answers draw only on "reliable sources," but the crucial question is whether its AI can actually identify good sources and extract good information from them in the first place.