Early this morning, the Alibaba Tongyi Qianwen team released the Qwen2 series of open-source models. The series includes pre-trained and instruction-tuned models in five sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. Compared with the previous generation, Qwen1.5, these models deliver significant improvements in both scale and performance.

Regarding multilingual capabilities, the Qwen2 series has invested heavily in increasing the quantity and quality of its training data, which covers 27 languages besides English and Chinese. Comparative testing shows that the large-scale models (with over 70B parameters) excel in natural language understanding, coding, mathematics, and more; Qwen2-72B even outperforms the previous generation while using fewer parameters.

The Qwen2 models not only demonstrate strong capabilities in basic language model evaluations but also achieve remarkable results in instruction-tuned model assessments. Their multilingual abilities shine in benchmarks like M-MMLU and MGSM, showcasing the powerful potential of Qwen2 instruction-tuned models.

The release of the Qwen2 series marks a new height in artificial intelligence technology, providing broader possibilities for global AI applications and commercialization. Looking ahead, Qwen2 will further expand model sizes and multimodal capabilities, accelerating the development of the open-source AI field.

Model Information

The Qwen2 series includes base and instruction-tuned models in five sizes: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. We have outlined the key information for each model in the table below:

| Models | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B | Qwen2-57B-A14B | Qwen2-72B |
|---|---|---|---|---|---|
| # Parameters | 0.49B | 1.54B | 7.07B | 57.41B | 72.71B |
| # Non-Emb Parameters | 0.35B | 1.31B | 5.98B | 56.32B | 70.21B |
| GQA | True | True | True | True | True |
| Tie Embedding | True | True | False | False | False |
| Context Length | 32K | 32K | 128K | 64K | 128K |

Specifically, in Qwen1.5 only Qwen1.5-32B and Qwen1.5-110B used Group Query Attention (GQA). This time we applied GQA to all model sizes so that every model benefits from faster inference and lower memory usage. For the smaller models, we prefer tied embeddings, because the large, sparse embedding matrices account for a significant portion of the total parameters.
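As a concrete illustration of these two settings, the minimal sketch below (assuming the Hugging Face transformers library and the public Qwen/Qwen2-0.5B and Qwen/Qwen2-7B checkpoints) reads the configuration fields that expose them; it is meant as a way to verify the table above, not as official tooling.

```python
# Minimal sketch: inspect GQA and tied-embedding settings from Qwen2 configs.
# Assumes the Hugging Face `transformers` library and network access to the
# public Qwen2 checkpoints on the Hub.
from transformers import AutoConfig

for name in ["Qwen/Qwen2-0.5B", "Qwen/Qwen2-7B"]:
    cfg = AutoConfig.from_pretrained(name)
    print(name)
    # Fewer key/value heads than query heads indicates grouped-query attention.
    print("  attention heads :", cfg.num_attention_heads)
    print("  key/value heads :", cfg.num_key_value_heads)
    # Expected True for the small models, where embeddings dominate parameters.
    print("  tied embeddings :", cfg.tie_word_embeddings)
```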

In terms of context length, all base language models were pre-trained on data with a context length of 32K tokens, and we have observed satisfactory extrapolation up to 128K in PPL evaluations. However, for instruction-tuned models we are not satisfied with PPL evaluations alone; we need the models to correctly understand long contexts and complete tasks. The table above lists the context length capabilities of the instruction-tuned models, evaluated on the Needle in a Haystack task. Notably, when enhanced with YaRN, both Qwen2-7B-Instruct and Qwen2-72B-Instruct exhibit impressive capabilities and can handle context lengths of up to 128K tokens.

We have made significant efforts to increase the quantity and quality of the pre-training and instruction-tuning datasets, which cover multiple languages besides English and Chinese, to enhance their multilingual capabilities. Although large language models inherently have the ability to generalize to other languages, we explicitly emphasize that we have included 27 other languages in our training:

| Region | Languages |
|---|---|
| Western Europe | German, French, Spanish, Portuguese, Italian, Dutch |
| Eastern and Central Europe | Russian, Czech, Polish |
| Middle East | Arabic, Persian, Hebrew, Turkish |
| East Asia | Japanese, Korean |
| Southeast Asia | Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog |
| South Asia | Hindi, Bengali, Urdu |

Additionally, we have invested considerable effort in addressing the issue of code-switching that often arises in multilingual evaluations. Therefore, our models' ability to handle this phenomenon has significantly improved. Evaluations using prompts that typically trigger cross-language code-switching have confirmed a significant reduction in related issues.

Performance

Comparative test results show that the performance of large-scale models (with over 70B parameters) has significantly improved compared to Qwen1.5. This test centers on the large-scale model Qwen2-72B. In terms of base language models, we compared the performance of Qwen2-72B with the current best open-source models in natural language understanding, knowledge acquisition, programming abilities, mathematical abilities, multilingual abilities, and more. Thanks to carefully selected datasets and optimized training methods, Qwen2-72B outperforms leading models like Llama-3-70B, and even surpasses the previous generation Qwen1.5-110B with fewer parameters.

After extensive large-scale pre-training, we conducted post-training to further enhance Qwen's intelligence, bringing it closer to human capabilities. This process further improved the model's abilities in coding, mathematics, reasoning, instruction following, multilingual understanding, and more. It also aligns the model's outputs with human values, ensuring they are useful, honest, and harmless. Our post-training phase is designed around the principles of scalable training and minimal human annotation.

Specifically, we researched how to obtain high-quality, reliable, diverse, and creative demonstration and preference data through various automatic alignment strategies, such as rejection sampling for mathematics, execution feedback for coding and instruction following, back-translation for creative writing, and scalable supervision for role-playing. For training, we combined supervised fine-tuning, reward model training, and online DPO training, and adopted a novel online merging optimizer to minimize the alignment tax. Together, these efforts significantly enhanced the capabilities and intelligence of our models.
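The post does not spell out the data pipeline, but purely as a hypothetical sketch of what rejection sampling for mathematics can look like, the example below samples several candidate solutions per problem and keeps only those whose final answer matches a reference answer. The sample_solution helper and the data format are illustrative stand-ins, not the team's actual tooling.

```python
# Hedged illustration of rejection sampling for math demonstration data:
# sample several candidate solutions per problem and keep only those whose
# final answer matches the reference. `sample_solution` is a hypothetical
# stand-in for a call to the current policy model.
import random
import re

def sample_solution(problem: str) -> str:
    # Placeholder generator; a real pipeline would query the language model here.
    return f"Reasoning about '{problem}'... The answer is {random.randint(0, 10)}."

def final_answer(solution: str) -> str | None:
    # Extract the final answer token from the end of the solution text.
    match = re.search(r"The answer is (\S+?)\.?$", solution)
    return match.group(1) if match else None

def rejection_sample(problems: list[dict], num_samples: int = 8) -> list[dict]:
    accepted = []
    for item in problems:
        for _ in range(num_samples):
            candidate = sample_solution(item["problem"])
            # Keep only candidates whose final answer matches the reference.
            if final_answer(candidate) == item["answer"]:
                accepted.append({"problem": item["problem"], "solution": candidate})
    return accepted

demos = rejection_sample([{"problem": "2 + 3", "answer": "5"}])
print(f"kept {len(demos)} verified demonstrations")
```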

We conducted a comprehensive evaluation of Qwen2-72B-Instruct across 16 benchmarks in various fields. Qwen2-72B-Instruct achieved a balance between better capabilities and alignment with human values. Specifically, Qwen2-72B-Instruct significantly outperformed Qwen1.5-72B-Chat in all benchmarks and achieved competitive performance compared to Llama-3-70B-Instruct.

Among smaller models, Qwen2 also outperforms SOTA models of similar or even larger size. Compared to recently released SOTA models, Qwen2-7B-Instruct still shows an advantage on various benchmarks, especially on coding and Chinese-related metrics.

Highlights

Coding and Mathematics

We have always been committed to enhancing Qwen's advanced features, especially in coding and mathematics. In coding, we successfully integrated CodeQwen1.5's code training experience and data, resulting in significant improvements in Qwen2-72B-Instruct's capabilities in various programming languages. In mathematics, by leveraging extensive and high-quality datasets, Qwen2-72B-Instruct has demonstrated stronger abilities in solving mathematical problems.

Long Context Understanding

In Qwen2, all instruction-tuned models were trained on 32K-token contexts and extended to longer context lengths at inference time using techniques such as YaRN or Dual Chunk Attention.
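As an illustrative sketch only (the exact mechanism and supported fields depend on the serving stack, for example vLLM or transformers, and should be checked against its documentation), the snippet below shows the kind of YaRN rope-scaling block described in the Qwen2 model cards, merged into a locally downloaded config.json before launching inference; the file path is a hypothetical local copy.

```python
# Illustrative YaRN rope-scaling settings for extending Qwen2's context at
# inference time; verify against your serving stack's docs before relying on it.
import json

yarn_rope_scaling = {
    "type": "yarn",
    "factor": 4.0,                               # 32K x 4 = 128K effective window
    "original_max_position_embeddings": 32768,   # pre-training context length
}

# Hypothetical workflow: patch a locally downloaded copy of the model's
# config.json, then point the inference server at the patched directory.
with open("config.json") as f:
    cfg = json.load(f)
cfg["rope_scaling"] = yarn_rope_scaling
with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
```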

Our Needle in a Haystack test results show that Qwen2-72B-Instruct handles information extraction within a 128K context flawlessly; combined with its strong overall performance, this makes it the preferred choice for long-text tasks when resources are sufficient.

Additionally, it is worth noting the impressive capabilities of the other models in the series: Qwen2-7B-Instruct almost perfectly handles contexts of up to 128K tokens, Qwen2-57B-A14B-Instruct manages up to 64K, and the two smaller models in the series support 32K.

In addition to long context models, we have also open-sourced an agent solution for efficiently processing documents containing up to 1 million tokens. For more details, please refer to our dedicated blog post on this topic.

Safety and Responsibility

The table below shows the proportion of harmful responses generated by large models for four types of multilingual unsafe queries (illegal activities, fraud, pornography, privacy violation). The test data comes from Jailbreak and was translated into multiple languages for evaluation. We found that Llama-3 cannot effectively handle multilingual prompts, so it was not included in the comparison. Through significance testing (p-value), we found that Qwen2-72B-Instruct performs comparably to GPT-4 in terms of safety and significantly better than Mistral-8x22B.

Each cell lists the proportion of harmful responses for GPT-4 / Mistral-8x22B / Qwen2-72B-Instruct (lower is better):

| Language | Illegal Activities | Fraud | Pornography | Privacy Violation |
|---|---|---|---|---|
| Chinese | 0% / 13% / 0% | 0% / 17% / 0% | 43% / 47% / 53% | 0% / 10% / 0% |
| English | 0% / 7% / 0% | 0% / 23% / 0% | 37% / 67% / 63% | 0% / 27% / 3% |
| Spanish | 0% / 13% / 0% | 0% / 7% / 0% | 15% / 26% / 15% | 3% / 13% / 0% |
| Portuguese | 0% / 7% / 0% | 3% / 0% / 0% | 48% / 64% / 50% | 3% / 7% / 3% |
| French | 0% / 3% / 0% | 3% / 3% / 7% | 3% / 19% / 7% | 0% / 27% / 0% |
| Korean | 0% / 4% / 0% | 3% / 8% / 4% | 17% / 29% / 10% | 0% / 26% / 4% |
| Japanese | 0% / 7% / 0% | 3% / 7% / 3% | 47% / 57% / 47% | 4% / 26% / 4% |
| Russian | 0% / 10% / 0% | 7% / 23% / 3% | 13% / 17% / 10% | 13% / 7% / 7% |
| Arabic | 0% / 4% / 0% | 4% / 11% / 0% | 22% / 26% / 22% | 0% / 0% / 0% |
| Average | 0% / 8% / 0% | 3% / 11% / 2% | 27% / 39% / 31% | 3% / – / – |