This year, large language models (LLMs) have garnered significant attention in the AI community, particularly for their remarkable progress in natural language processing. A new study finds that LLMs struggle to identify errors in their own reasoning, and are generally unable to self-correct them unaided; however, when told where an error occurs, they can correct it using a proposed backtracking method. The article summarizes the study's new dataset and test results, revealing how much difficulty even the best current LLMs have with error detection.
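The study's backtracking method is not detailed here, but the core idea it describes (regenerating a reasoning trace from a known error location) can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the stub `regenerate` callback standing in for an LLM call, and the example trace are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List

def backtrack_correct(
    steps: List[str],
    mistake_index: int,
    regenerate: Callable[[List[str]], List[str]],
) -> List[str]:
    """Keep the steps before the flagged mistake, discard the rest,
    and ask the model to regenerate the trace from that point on.
    (A sketch of one plausible backtracking scheme, not the paper's code.)"""
    prefix = steps[:mistake_index]       # steps before the mistake are trusted
    return prefix + regenerate(prefix)   # model continues from the prefix

# Stub standing in for an LLM: in practice this would prompt the model
# with the trusted prefix and sample a fresh continuation.
def fake_regenerate(prefix: List[str]) -> List[str]:
    return ["2 + 2 = 4", "answer: 4"]

trace = ["start with 2 and 2", "2 + 2 = 5", "answer: 5"]
fixed = backtrack_correct(trace, mistake_index=1, regenerate=fake_regenerate)
print(fixed)  # ['start with 2 and 2', '2 + 2 = 4', 'answer: 4']
```

The key design point this captures is that the model is never asked to find the mistake itself; the error location is supplied externally, matching the study's finding that correction succeeds only when that information is provided.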