Large Language Models (LLMs) such as the GPT series exhibit remarkable abilities in language understanding, reasoning, and planning, achieving human-level performance on a variety of challenging tasks thanks to training on vast datasets. Most research focuses on further enhancing these models by training them on ever-larger datasets, with the aim of developing more powerful foundation models. However, while training stronger foundation models is crucial, researchers argue that empowering models to continue evolving during inference, a capability known as AI self-evolution, is equally essential for the advancement of AI.