Moonshot AI ("The Dark Side of the Moon"), the company behind the Kimi smart assistant, has announced the launch of its next-generation mathematical reasoning model, k0-math. The model performed exceptionally well across multiple mathematical benchmarks, surpassing OpenAI's o1-mini and o1-preview in four tests: China's high school entrance exam, college entrance exam, and postgraduate entrance exam, plus the MATH benchmark, which includes introductory competition problems.


Notably, on the MATH test the k0-math model scored 93.8, just behind the full version of o1 at 94.8. Although the initial k0-math model reached only 90% and 83% of o1-mini's top scores on the competition-level OMNI-MATH and AIME benchmarks respectively, the company plans to continue iterating to improve its ability to solve harder problems.

The k0-math model takes a new approach that combines reinforcement learning with chain-of-thought reasoning. By simulating human-like thinking and reflection, it significantly improves its ability to tackle complex mathematical problems.

During problem solving, the model spends more time reasoning, thinking through and planning its strategy, and it reflects on and refines its approach as needed to raise its success rate.
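The "reason, then reflect and retry" loop described above can be sketched schematically. This is a hypothetical illustration only: the `solve_with_reflection`, `toy_solver`, and `toy_checker` names are invented for this example, and the toy solver and checker are stand-ins, not Kimi's actual system.

```python
def solve_with_reflection(problem, solver, checker, max_attempts=3):
    """Try candidate solutions; on failure, feed the failure back (reflection)."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = solver(problem, feedback)
        if checker(problem, candidate):
            return candidate, attempt
        # Reflection step: record why this candidate failed, then retry.
        feedback = f"attempt {attempt} gave {candidate}, which failed the check"
    return None, max_attempts

# Toy example: "solve" x + 2 = 5; the first attempt is deliberately wrong,
# so the loop must reflect once before finding the right answer.
def toy_solver(problem, feedback):
    return 4 if feedback is None else 3

def toy_checker(problem, candidate):
    return candidate + 2 == 5

answer, attempts = solve_with_reflection("x + 2 = 5", toy_solver, toy_checker)
print(answer, attempts)  # → 3 2
```

In a real reasoning model, the solver and checker would both be the model itself: it generates a chain of thought, critiques it, and revises, with reinforcement learning rewarding chains that reach verified answers.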

Although the k0-math model answers most difficult mathematical questions well, the current version still cannot solve geometry problems that are hard to express in LaTeX. It may also overthink very simple math problems, and it still has some probability of making mistakes on college entrance exam questions and IMO problems.