Mistral recently announced an upgrade to its open-source code generation model Codestral with the launch of a new version, Codestral 25.01. The update significantly strengthens the model's competitiveness in programming and aims to give developers a more efficient code generation experience.

According to Mistral's blog post, the architecture of Codestral 25.01 has been optimized, with the company positioning it as the "absolute leader" among comparable models and claiming code generation roughly twice as fast as the previous version. The new release keeps the original model's focus on low latency and high-frequency operations, supporting tasks such as code correction, test generation, and code completion. Mistral says this is particularly important for enterprises with large amounts of data and for use cases that require deploying the model itself.
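
Code completion with Codestral is typically driven by fill-in-the-middle (FIM) style requests, where the model fills the gap between a code prefix and suffix. The snippet below is a minimal sketch of such a request; the FIM endpoint path, the `codestral-latest` model alias, the `MISTRAL_API_KEY` environment variable, and the response layout are assumptions based on Mistral's public API and should be checked against the current documentation.

```python
import os

import requests

# Code surrounding the gap we want the model to fill in.
prefix = 'def is_palindrome(s: str) -> bool:\n    """Return True if s reads the same forwards and backwards."""\n    '
suffix = '\n\nprint(is_palindrome("level"))\n'

# Fill-in-the-middle request; endpoint, model alias, and fields are assumptions.
response = requests.post(
    "https://api.mistral.ai/v1/fim/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": 64,
        "temperature": 0,
    },
    timeout=30,
)
response.raise_for_status()

# The completed middle section is expected in a chat-completion-style payload.
print(response.json()["choices"][0]["message"]["content"])
```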

Across benchmarks, Codestral 25.01 stood out on Python coding tasks, scoring 86.6% on the HumanEval test and surpassing the previous version of Codestral, CodeLlama 70B Instruct, and DeepSeek Coder 33B Instruct.

Developers can use Codestral 25.01 through Mistral's IDE plugin partners. The model's API is also available via Mistral's platform and Google Vertex AI. Currently, the model is available in preview on Azure AI Foundry and will be launched on Amazon Bedrock.
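
For illustration, here is a minimal sketch of calling the model over Mistral's API with a plain HTTPS request; the chat completions endpoint, the `codestral-latest` alias, and the `MISTRAL_API_KEY` environment variable are assumptions and may differ from the current API or from partner platforms such as Vertex AI.

```python
import os

import requests

# Simple chat-completion request asking the model to generate code.
# Endpoint, model alias, and response fields are assumptions; check the docs.
response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "messages": [
            {
                "role": "user",
                "content": "Write a Python function that merges two sorted lists "
                           "into one sorted list, with a short docstring.",
            }
        ],
        "temperature": 0.2,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```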

Since Codestral's initial release in May 2024, Mistral has continued to upgrade and iterate on the product line. The earlier Codestral Mamba model, built on the Mamba architecture, can generate longer stretches of code and handle larger inputs. Notably, Codestral 25.01 climbed to the top of the Copilot Arena leaderboard within hours of Mistral's announcement, signaling strong market interest in the new model.

Writing code was one of the earliest applications of foundation models. While general-purpose models such as OpenAI's o3 and Anthropic's Claude can also write code, dedicated programming models have made significant progress over the past year, often outperforming larger general-purpose models. Alibaba, DeepSeek, and Microsoft have recently released new programming models as well, intensifying the competition.

Developers still debate whether to choose a general-purpose model or one focused on programming. Some prefer general models such as Claude, but the demands of programming work have driven the emergence of specialized models. Because Codestral is trained specifically on code, it tends to have an edge on programming tasks.

Official Blog: https://mistral.ai/news/codestral-2501/

Key Points:  

🌟 Mistral launches Codestral 25.01, doubling the code generation speed of the previous version.

💻 The model excels in Python coding tests, achieving an 86.6% score in the HumanEval test.  

📈 Codestral 25.01 quickly rose to the top of the Copilot Arena leaderboard, drawing widespread attention from developers.