The emergence of large language models (LLMs), and in particular the widespread adoption of applications like ChatGPT, has transformed the way humans interact with machines. These models can generate remarkably coherent and comprehensive text. Yet despite their capabilities, LLMs are prone to "hallucinations": content that appears genuine but is in fact fabricated, meaningless, or inconsistent with the prompt.


Researchers at Harvard University have studied the phenomenon of LLM "hallucinations" in depth and find that its root cause lies in how LLMs work. LLMs build probabilistic models from vast amounts of text data and predict the next word from the probability of word co-occurrence. In other words, LLMs do not truly understand the meaning of language; they generate predictions from statistical regularities.
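To make the idea concrete, here is a minimal sketch of statistical next-word prediction using a toy bigram model over a made-up corpus. This is not how production LLMs are built (they use neural networks over subword tokens), but it illustrates the same principle described above: the next word is sampled according to co-occurrence statistics, with no notion of truth.

```python
# Toy illustration of statistical next-word prediction (a bigram model).
# Real LLMs use neural networks over subword tokens, but the core idea is
# similar: the next token is drawn from a probability distribution
# estimated from how often words follow one another in the training text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to observed co-occurrence counts."""
    counts = follower_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: statistically plausible text, with no
# guarantee of factual accuracy -- the root of "hallucination".
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```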

The researchers compare LLMs to crowdsourcing: an LLM essentially outputs the "consensus of the web." Much like platforms such as Wikipedia or Reddit, it distills large volumes of text and returns the most common answer. Because most written language aims to describe the world accurately, the answers LLMs generate are usually correct.
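As a loose illustration of this "consensus of the web" analogy (the topics and answer counts below are invented for illustration, not taken from the paper), consider a simple majority vote over answers found in source text: when sources agree, the most common answer is reliable; when they do not, the "consensus" is essentially arbitrary.

```python
# Illustrative analogy only: "consensus of the web" as a majority vote.
# The questions and answer lists below are made up; the point is that an
# answer produced by frequency is right when sources agree and unreliable
# when they do not.
from collections import Counter

web_answers = {
    # Well-covered, uncontroversial fact: sources agree.
    "capital of France": ["Paris", "Paris", "Paris", "Paris"],
    # Ambiguous or contested topic: no clear consensus.
    "best programming language": ["Python", "C++", "Rust", "Python", "JavaScript"],
}

def consensus_answer(topic):
    """Return the most common answer and how dominant it is."""
    counts = Counter(web_answers[topic])
    answer, votes = counts.most_common(1)[0]
    return answer, votes / sum(counts.values())

for topic in web_answers:
    answer, agreement = consensus_answer(topic)
    print(f"{topic!r}: {answer} (agreement {agreement:.0%})")
```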

However, when LLMs encounter topics that are ambiguous, controversial, or lack consensus, "hallucinations" occur. To test this hypothesis, the researchers designed a series of experiments measuring how different LLMs perform across a range of topics. The results show that LLMs do well on common topics but that their accuracy drops markedly on ambiguous or controversial ones.

The study indicates that while LLMs are powerful tools, their accuracy depends on the quality and quantity of their training data. Their output should therefore be treated with caution, especially on ambiguous or controversial topics. The work also points to directions for future development, such as improving how LLMs handle ambiguous and controversial topics and making their output more interpretable.

Paper link: https://dl.acm.org/doi/pdf/10.1145/3688007