Large Model Reasoning Bug! Reverse the Question and All Answers Fail, No Model Is Exempt, from GPT to Llama

Researchers have recently identified a phenomenon known as the "Reversal Curse": large language models trained on facts of the form "A is B" cannot infer the reverse, "B is A". The research, based on experiments with both synthetic and real-world data, shows that this bug affects top-tier large models regardless of scale. It exposes a limitation in the logical reasoning abilities of large models and casts doubt on their dependability in critical applications. As AI applications built on large models become increasingly widespread, the Reversal Curse serves as a warning not to be overly optimistic about their reliability.
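To make the phenomenon concrete, here is a minimal toy sketch (an assumption for illustration, not an actual language model): a lookup table that has "learned" a fact in only one direction, mirroring the paper's well-known Tom Cruise example, where models that know "Tom Cruise's mother is Mary Lee Pfeiffer" fail to answer "Who is Mary Lee Pfeiffer's son?".

```python
# Toy stand-in for an LLM's one-directional recall (illustrative only).
# The "model" stores the fact as "A is B" and can answer in that
# direction, but the reverse question finds nothing -- analogous to
# the Reversal Curse observed in real large language models.

training_facts = {"Tom Cruise's mother": "Mary Lee Pfeiffer"}

def answer(question_subject: str) -> str:
    # Lookup succeeds only in the direction the fact was stored.
    return training_facts.get(question_subject, "I don't know")

print(answer("Tom Cruise's mother"))      # -> Mary Lee Pfeiffer
print(answer("Mary Lee Pfeiffer's son"))  # -> I don't know
```

Real models fail for a subtler reason than a dictionary lookup, of course, but the asymmetry is the same: training on one direction of a relation does not grant recall of the other.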

36氪
This article is from AIbase Daily