A recent study led by the Technion - Israel Institute of Technology has found that large language models (LLMs) may be "hiding their true capabilities," knowing more than they actually show. The researchers discovered that LLMs' internal representations encode information about the correctness of their outputs: even when a model ultimately generates an incorrect answer, it can often identify the correct one internally.

The research team focused on analyzing errors in long-form text generated by LLMs, which is closer to how the models are used in practice. They constructed an error detection dataset by comparing model-generated answers against ground-truth answers to label each one as correct or incorrect, and used it to study where truthfulness signals are encoded in the models' internal representations.

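To make the labeling step concrete, here is a minimal sketch of how such an error-detection dataset could be assembled, assuming (question, ground-truth answer, model answer) triples are already available; the normalization rule and field names are illustrative, not the paper's exact procedure.

```python
# A minimal sketch of building an error-detection dataset. The lenient
# string-match labeling rule below is an assumption for illustration,
# not the paper's exact correctness criterion.
import re
from dataclasses import dataclass

@dataclass
class ErrorDetectionExample:
    question: str
    model_answer: str
    is_correct: bool  # the label a detector will later be trained to predict

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/articles for a lenient string match."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = re.sub(r"[^a-z0-9 ]", " ", text)
    return " ".join(text.split())

def label_example(question: str, gold: str, generated: str) -> ErrorDetectionExample:
    # Mark the generation as correct if the normalized gold answer appears in it.
    correct = normalize(gold) in normalize(generated)
    return ErrorDetectionExample(question, generated, correct)

example = label_example(
    "What is the capital of Connecticut?",
    "Hartford",
    "The capital of Connecticut is Hartford.",
)
print(example.is_correct)  # True
```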

The study found that, unlike in previous work focusing on the last generated token or on averages over all tokens, truthfulness information is concentrated in the "exact answer tokens": the tokens that, if modified, would change the correctness of the answer. For example, for the question "What is the capital of Connecticut?", the exact answer token is "Hartford."

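In practice, locating the exact answer tokens means finding where the answer string sits in the generated token sequence. The sketch below does this with a Hugging Face tokenizer's offset mapping; the stand-in model ("gpt2") and the example strings are assumptions, not the paper's setup.

```python
# A sketch of locating the "exact answer tokens" inside a generated answer:
# find the answer string's character span, then map it to token positions
# using the tokenizer's offset mapping.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative stand-in model

generated = "The capital of Connecticut is Hartford."
exact_answer = "Hartford"

start_char = generated.index(exact_answer)
end_char = start_char + len(exact_answer)

enc = tokenizer(generated, return_offsets_mapping=True)
exact_answer_token_positions = [
    i
    for i, (tok_start, tok_end) in enumerate(enc["offset_mapping"])
    if tok_start < end_char and tok_end > start_char  # token overlaps the answer span
]
print(exact_answer_token_positions)  # indices of the tokens that spell "Hartford"
```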
To identify exact answer tokens, the researchers used an external algorithm to extract the exact answer from the model's longer output. Experiments showed that all of the evaluated LLMs could extract exact answers from their own outputs.

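One simple way to implement such an extraction step is to prompt a model to pull the short answer out of a longer generation. The sketch below illustrates the idea with a generic text-generation pipeline; the prompt wording and the stand-in model are assumptions, not the paper's extraction algorithm.

```python
# A hedged sketch of prompt-based exact-answer extraction; the prompt template
# and the stand-in model below are assumptions for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # illustrative stand-in model

long_output = (
    "Connecticut is a state in New England. Its capital city is Hartford, "
    "although Bridgeport is its largest city."
)
prompt = (
    "Question: What is the capital of Connecticut?\n"
    f"Answer: {long_output}\n"
    "The exact answer is:"
)
completion = generator(prompt, max_new_tokens=5, do_sample=False)
print(completion[0]["generated_text"][len(prompt):].strip())
```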
Through experiments across different models and datasets, the researchers found that using exact answer tokens significantly improves the performance of error detection methods, especially when probing the model's internal representations.

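A common way to probe internal representations is to take the hidden state at the exact answer token and fit a small classifier to predict correctness. The sketch below shows that pattern under several assumptions: the stand-in model ("gpt2"), the choice of layer, the tiny toy dataset, and the use of logistic regression are all illustrative rather than the paper's exact probing setup.

```python
# A minimal probing sketch: hidden state at the exact answer token from one
# layer, fed to a linear classifier that predicts answer correctness.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in; the paper probes larger open LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True).eval()

def answer_token_features(text: str, answer: str, layer: int = 6) -> torch.Tensor:
    """Hidden state of the last exact-answer token at a given layer (assumed choice)."""
    start = text.index(answer)
    end = start + len(answer)
    enc = tokenizer(text, return_offsets_mapping=True, return_tensors="pt")
    offsets = enc.pop("offset_mapping")[0].tolist()
    positions = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (seq_len, hidden_dim)
    return hidden[positions[-1]]

# Tiny illustrative training set: (generated answer, exact answer, correct?)
examples = [
    ("The capital of Connecticut is Hartford.", "Hartford", 1),
    ("The capital of Connecticut is Bridgeport.", "Bridgeport", 0),
    ("The capital of France is Paris.", "Paris", 1),
    ("The capital of France is Lyon.", "Lyon", 0),
]
X = torch.stack([answer_token_features(t, a) for t, a, _ in examples]).numpy()
y = [label for _, _, label in examples]

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))  # in practice, evaluate on held-out questions
```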
More surprisingly, even when the model showed no preference for the correct answer during generation, a detector trained on its internal representations could still effectively identify it. This points to a significant disconnect between LLMs' internal encoding and their external behavior: even when a model internally "knows" the correct answer, it may still produce an incorrect one when generating text.

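Continuing the probing sketch above, one way to exploit this gap is to sample several answers to the same question and let the probe pick the one it scores as most likely correct. The function below assumes the `probe` and `answer_token_features` defined earlier and takes (answer text, exact answer) pairs as input; it mirrors the paper's answer-selection experiment only in spirit.

```python
# Using the trained probe to choose among sampled answers (illustrative only;
# relies on `probe` and `answer_token_features` from the previous sketch).
def pick_by_probe(candidates: list[tuple[str, str]]) -> str:
    """candidates: (generated answer text, exact answer substring) pairs."""
    def score(candidate: tuple[str, str]) -> float:
        text, exact = candidate
        feats = answer_token_features(text, exact).numpy().reshape(1, -1)
        return probe.predict_proba(feats)[0, 1]  # predicted probability of being correct
    return max(candidates, key=score)[0]

print(pick_by_probe([
    ("The capital of Connecticut is Hartford.", "Hartford"),
    ("The capital of Connecticut is New Haven.", "New Haven"),
]))
```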
This research is significant for the error analysis and improvement of LLMs. A deeper understanding of how truthfulness signals are encoded in LLMs' internal representations can support more effective error detection and correction methods, thereby improving the reliability and practicality of LLMs.

Paper link: https://arxiv.org/pdf/2410.02707