The Tongyi Qianwen team recently announced the open-source release of its latest Qwen2.5-Coder series, aiming to promote the development of open code LLMs. Qwen2.5-Coder has garnered attention for its strength, diversity, and practicality. The Qwen2.5-Coder-32B-Instruct model has achieved SOTA-level coding capability comparable to GPT-4o, demonstrating comprehensive abilities across code generation, code repair, and code reasoning. It has achieved top performance on multiple code generation benchmarks and scored 73.7 on the Aider benchmark, matching GPT-4o.
Qwen2.5-Coder supports over 40 programming languages and scored 65.9 on McEval, with particularly strong performance in languages like Haskell and Racket, thanks to its data cleaning and data-mixing ratios during the pre-training phase. Additionally, Qwen2.5-Coder-32B-Instruct excels at code repair across multiple programming languages, scoring 75.2 on the MdEval benchmark and ranking first.
To assess Qwen2.5-Coder-32B-Instruct's alignment with human preferences, an internally labeled code preference evaluation benchmark, Code Arena, was constructed. The results show that Qwen2.5-Coder-32B-Instruct has an advantage in preference alignment.
This open-source release includes six model sizes: 0.5B/1.5B/3B/7B/14B/32B, covering the mainstream model scales and catering to the needs of different developers. Each size is provided in both Base and Instruct variants: the former serves as a foundation for developers to fine-tune, while the latter is an officially aligned chat model. Model size correlates positively with performance, and Qwen2.5-Coder achieves SOTA performance at every size.
The 0.5B/1.5B/7B/14B/32B models of Qwen2.5-Coder are licensed under Apache 2.0, while the 3B model is under a Research Only license. By evaluating Qwen2.5-Coder at different sizes across all datasets, the team has verified the effectiveness of scaling in code LLMs.
The open-source release of Qwen2.5-Coder gives developers a powerful, diverse, and practical choice of coding model, contributing to the development and application of programming language models.
Qwen2.5-Coder Model Links:
https://modelscope.cn/collections/Qwen25-Coder-9d375446e8f5814a