Early this morning, the Alibaba Tongyi Qianwen (Qwen) team released the Qwen2 series of open-source models. The series includes five sizes of pre-trained and instruction-tuned models: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. Compared with the previous generation, Qwen1.5, these models deliver significantly better performance across the lineup.

On the multilingual front, the team has invested heavily in increasing both the quantity and quality of the training data, which covers 27 languages besides English and Chinese. Comparative testing shows that the large-scale models (70B+ parameters) excel in natural language understanding, coding, mathematics, and more; Qwen2-72B even surpasses the larger previous-generation model despite having fewer parameters.

The Qwen2 models not only demonstrate strong capabilities in basic language model evaluations but also achieve remarkable results in instruction-tuned model assessments. Their multilingual abilities shine in benchmarks like M-MMLU and MGSM, showcasing the powerful potential of Qwen2 instruction-tuned models.

The release of the Qwen2 series marks a new height in artificial intelligence technology, providing broader possibilities for global AI applications and commercialization. Looking ahead, Qwen2 will further expand model sizes and multimodal capabilities, accelerating the development of the open-source AI field.

Model Information

The Qwen2 series comprises five sizes of base and instruction-tuned models: Qwen2-0.5B, Qwen2-1.5B, Qwen2-7B, Qwen2-57B-A14B, and Qwen2-72B. Key information for each model is summarized in the table below:

| Models | Qwen2-0.5B | Qwen2-1.5B | Qwen2-7B | Qwen2-57B-A14B | Qwen2-72B |
|---|---|---|---|---|---|
| # Parameters | 0.49B | 1.54B | 7.07B | 57.41B | 72.71B |
| # Non-Emb Parameters | 0.35B | 1.31B | 5.98B | 56.32B | 70.21B |
| GQA | True | True | True | True | True |
| Tie Embedding | True | True | False | False | False |
| Context Length | 32K | 32K | 128K | 64K | 128K |
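
For orientation, the instruction-tuned checkpoints can be used through the standard Hugging Face transformers chat interface. The following is a minimal generation sketch; the model name and prompt are illustrative, and generation settings should be taken from the official model card:

```python
# Minimal generation sketch for a Qwen2 instruction-tuned checkpoint,
# using the standard Hugging Face transformers chat-template API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2-7B-Instruct"  # any size in the series loads the same way

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # pick bf16/fp16 automatically where supported
    device_map="auto",   # place weights on available GPU(s)
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```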

Specifically, in Qwen1.5, only Qwen1.5-32B and Qwen1.5-110B used Group Query Attention (GQA). This time, we applied GQA to all model sizes so that every model benefits from faster inference and lower memory usage. For the smaller models, we prefer tied embeddings, because the large, sparse embedding matrices account for a significant share of those models' total parameters.
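
To see concretely why GQA reduces inference memory, consider the KV cache, whose size scales with the number of key/value heads rather than the number of query heads. A back-of-the-envelope sketch follows; the layer and head counts are illustrative, not Qwen2's published configuration:

```python
# Back-of-the-envelope KV-cache size with and without GQA.
# With GQA, keys/values are stored for a small number of KV-head groups
# instead of one set per query head, shrinking the cache proportionally.
# The numbers below are illustrative, not Qwen2's published config.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_param=2):
    # 2x for keys and values; bf16/fp16 = 2 bytes per element
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_param

layers, heads, head_dim, seq_len = 80, 64, 128, 32_768

mha = kv_cache_bytes(layers, kv_heads=heads, head_dim=head_dim, seq_len=seq_len)
gqa = kv_cache_bytes(layers, kv_heads=8, head_dim=head_dim, seq_len=seq_len)

print(f"MHA cache: {mha / 2**30:.1f} GiB")  # ~80 GiB at these settings
print(f"GQA cache: {gqa / 2**30:.1f} GiB")  # ~10 GiB, an 8x reduction
```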

In terms of context length, all base language models were pre-trained on data with a context length of 32K tokens, and we observe satisfactory extrapolation up to 128K in PPL evaluations. For the instruction-tuned models, however, PPL alone is not enough: the models must actually understand long contexts and complete tasks over them. The context lengths listed in the table reflect the instruction-tuned models' capabilities as measured on the Needle in a Haystack task. Notably, when enhanced with YaRN, both Qwen2-7B-Instruct and Qwen2-72B-Instruct handle context lengths of up to 128K tokens.
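
As a reference for how such extrapolation is typically enabled in practice, rope scaling can be overridden at load time. The sketch below follows the Hugging Face `rope_scaling` convention; the scaling factor (128K / 32K = 4.0) and field values are assumptions to verify against the official model card:

```python
# Sketch: enabling YaRN rope scaling when loading a Qwen2-Instruct checkpoint
# for long-context use. Field names follow the Hugging Face rope_scaling
# convention; the factor shown (32K pre-trained -> 128K target) is an
# assumption to check against the official model card.
from transformers import AutoConfig, AutoModelForCausalLM

name = "Qwen/Qwen2-7B-Instruct"
config = AutoConfig.from_pretrained(name)
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,                               # 128K / 32K
    "original_max_position_embeddings": 32768,
}
model = AutoModelForCausalLM.from_pretrained(name, config=config, device_map="auto")
```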

We have made significant efforts to increase both the quantity and quality of the pre-training and instruction-tuning datasets, which cover many languages beyond English and Chinese, to strengthen the models' multilingual capabilities. Although large language models have an inherent ability to generalize to other languages, we explicitly note that 27 additional languages were included in training:

| Region | Languages |
|---|---|
| Western Europe | German, French, Spanish, Portuguese, Italian, Dutch |
| Eastern Europe and Central Europe | Russian, Czech, Polish |
| Middle East | Arabic, Persian, Hebrew, Turkish |
| East Asia | Japanese, Korean |
| Southeast Asia | Vietnamese, Thai, Indonesian, Malay, Lao, Burmese, Cebuano, Khmer, Tagalog |
| South Asia | Hindi, Bengali, Urdu |

Additionally, we invested considerable effort in addressing the code-switching issues that often arise in multilingual generation, and our models' handling of this phenomenon has improved significantly. Evaluations using prompts known to trigger cross-lingual code-switching confirm a marked reduction in such errors.
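
A crude way to flag code-switching in evaluation outputs is to scan a response for characters outside the expected writing system. The sketch below only catches script-level mixing (e.g. CJK characters inside Latin-script text); a real evaluation pipeline would use a proper language-identification model:

```python
# Crude code-switching detector: flag responses that mix writing systems.
# This only catches script-level mixing; a real pipeline would use a
# language-ID model to detect switching between same-script languages.
import unicodedata

def scripts_used(text: str) -> set[str]:
    scripts = set()
    for ch in text:
        if ch.isalpha():
            # Unicode character names start with the script, e.g.
            # "LATIN ...", "CJK ...", "HANGUL ...", "CYRILLIC ..."
            scripts.add(unicodedata.name(ch, "").split(" ")[0])
    return scripts

def looks_code_switched(response: str, expected_script: str = "LATIN") -> bool:
    return any(s != expected_script for s in scripts_used(response))

print(looks_code_switched("The answer is 42."))             # False
print(looks_code_switched("The answer is 42,也就是四十二。"))  # True
```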

Performance

Comparative test results show that the large-scale models (70B+ parameters) improve significantly over Qwen1.5. Our evaluation centers on Qwen2-72B. At the base-model level, we compared Qwen2-72B with the current best open-source models on natural language understanding, knowledge acquisition, coding, mathematics, multilingual ability, and more. Thanks to carefully curated datasets and optimized training methods, Qwen2-72B outperforms leading models such as Llama-3-70B, and even surpasses the previous-generation Qwen1.5-110B despite having fewer parameters.

After extensive large-scale pre-training, we conducted post-training to further enhance Qwen's intelligence and bring it closer to human capabilities. This process improves the model's abilities in coding, mathematics, reasoning, instruction following, multilingual understanding, and more, and aligns its outputs with human values so that they are helpful, honest, and harmless. Our post-training phase is designed around scalable training with minimal human annotation: we studied how to obtain high-quality, reliable, diverse, and creative demonstration and preference data through automatic alignment strategies such as rejection sampling for mathematics, execution feedback for coding and instruction following, back-translation for creative writing, and scalable supervision for role-playing. For training, we combine supervised fine-tuning, reward-model training, and online DPO training, and we adopt a novel online merging optimizer to minimize the alignment tax. Together, these efforts significantly improved the capabilities and intelligence of our models, as reflected in the evaluation results below.
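
For readers unfamiliar with DPO, its core objective scores a preferred and a rejected response under both the policy being trained and a frozen reference model, then maximizes the margin between them. Below is a minimal sketch of the standard (offline) DPO loss, with sequence log-probabilities assumed precomputed; this illustrates the general technique, not Qwen's internal implementation:

```python
# Minimal sketch of the (offline) DPO loss used in preference training.
# Inputs are summed log-probabilities of the chosen and rejected responses
# under the policy being trained and under a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: how strongly the policy upweights each response
    # relative to the reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: push chosen above rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy batch of two preference pairs (fake sequence log-probs):
t = torch.tensor
loss = dpo_loss(t([-12.0, -9.5]), t([-14.0, -11.0]),
                t([-12.5, -10.0]), t([-13.0, -10.5]))
print(loss)  # scalar; shrinks as the chosen/rejected margin grows
```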

We conducted a comprehensive evaluation of Qwen2-72B-Instruct across 16 benchmarks spanning a range of domains. Qwen2-72B-Instruct strikes a balance between stronger capabilities and alignment with human values. Specifically, it significantly outperforms Qwen1.5-72B-Chat on all benchmarks and is competitive with Llama-3-70B-Instruct.

At smaller scales, the Qwen2 models likewise outperform comparable or even larger SOTA models. Against recently released SOTA models of similar size, Qwen2-7B-Instruct still holds an advantage across a range of benchmarks, especially on coding and Chinese-language metrics.

Highlights

Coding and Mathematics

We have always been committed to enhancing Qwen's advanced features, especially in coding and mathematics. In coding, we successfully integrated CodeQwen1.5's code training experience and data, resulting in significant improvements in Qwen2-72B-Instruct's capabilities in various programming languages. In mathematics, by leveraging extensive and high-quality datasets, Qwen2-72B-Instruct has demonstrated stronger abilities in solving mathematical problems.

Long Context Understanding

In Qwen2, all instruction-tuned models were trained on 32K-token contexts and extended to longer context lengths with techniques such as YaRN or Dual Chunk Attention.

The chart below shows our test results on Needle in a Haystack. Notably, Qwen2-72B-Instruct handles information-extraction tasks flawlessly within a 128K context; combined with its inherently strong performance, this makes it the preferred choice for long-text tasks when resources permit.

Additionally, it is worth noting the impressive capabilities of the other models in the series: Qwen2-7B-Instruct handles contexts of up to 128K almost perfectly, Qwen2-57B-A14B-Instruct manages up to 64K, and the two smaller models in the series support 32K.
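
The Needle in a Haystack protocol itself is easy to reproduce: hide a single fact (the "needle") at varying depths inside long filler text and check whether the model retrieves it. A minimal sketch follows, where the filler, needle, and `generate` call are all illustrative stand-ins:

```python
# Minimal sketch of a "Needle in a Haystack" probe: hide one fact at a
# chosen depth inside long filler text, then ask the model to retrieve it.
# `generate(prompt)` stands for any chat call to the model under test.

NEEDLE = "The best thing to do in San Francisco is to eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"
FILLER = "The grass is green. The sky is blue. " * 20_000  # long haystack

def build_prompt(context_chars: int, depth: float) -> str:
    haystack = FILLER[:context_chars]
    cut = int(len(haystack) * depth)  # depth 0.0 = start, 1.0 = end
    context = haystack[:cut] + " " + NEEDLE + " " + haystack[cut:]
    return f"{context}\n\nAnswer based only on the text above: {QUESTION}"

def passed(answer: str) -> bool:
    return "Dolores Park" in answer  # crude pass/fail check

# Sweep context sizes and needle depths, calling the model each time:
# for chars in (8_000, 64_000, 256_000):
#     for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
#         print(chars, depth, passed(generate(build_prompt(chars, depth))))
```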

In addition to long context models, we have also open-sourced an agent solution for efficiently processing documents containing up to 1 million tokens. For more details, please refer to our dedicated blog post on this topic.

Safety and Responsibility

The table below shows the proportion of harmful responses generated by large models for four categories of multilingual unsafe queries: illegal activities, fraud, pornography, and privacy violence. The test data is drawn from jailbreak prompts translated into multiple languages for evaluation. We found that Llama-3 could not handle multilingual prompts effectively, so it was excluded from the comparison. Through significance testing (p-values), we found that Qwen2-72B-Instruct performs comparably to GPT-4 on safety and significantly better than Mistral-8x22B.

Each cell shows the harmful-response rate for GPT-4 / Mistral-8x22B / Qwen2-72B-Instruct, in that order (lower is better).

| Language | Illegal Activities | Fraud | Pornography | Privacy Violence |
|---|---|---|---|---|
| Chinese | 0% / 13% / 0% | 0% / 17% / 0% | 43% / 47% / 53% | 0% / 10% / 0% |
| English | 0% / 7% / 0% | 0% / 23% / 0% | 37% / 67% / 63% | 0% / 27% / 3% |
| Spanish | 0% / 13% / 0% | 0% / 7% / 0% | 15% / 26% / 15% | 3% / 13% / 0% |
| Portuguese | 0% / 7% / 0% | 3% / 0% / 0% | 48% / 64% / 50% | 3% / 7% / 3% |
| French | 0% / 3% / 0% | 3% / 3% / 7% | 3% / 19% / 7% | 0% / 27% / 0% |
| Korean | 0% / 4% / 0% | 3% / 8% / 4% | 17% / 29% / 10% | 0% / 26% / 4% |
| Japanese | 0% / 7% / 0% | 3% / 7% / 3% | 47% / 57% / 47% | 4% / 26% / 4% |
| Russian | 0% / 10% / 0% | 7% / 23% / 3% | 13% / 17% / 10% | 13% / 7% / 7% |
| Arabic | 0% / 4% / 0% | 4% / 11% / 0% | 22% / 26% / 22% | 0% / 0% / 0% |
| Average | 0% / 8% / 0% | 3% / 11% / 2% | 27% / 39% / 31% | 3% / 16% / 2% |
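
As a point of reference, the significance testing mentioned above can be reproduced with a standard two-proportion z-test over harmful-response counts. The sketch below uses illustrative counts, not the actual evaluation data:

```python
# Sketch: comparing two models' harmful-response rates with a two-proportion
# z-test, the kind of significance test the p-values above refer to.
# Counts are illustrative: harmful responses out of N prompts per model.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(harm_a, n_a, harm_b, n_b):
    p_a, p_b = harm_a / n_a, harm_b / n_b
    pooled = (harm_a + harm_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# e.g. 39 harmful out of 100 prompts vs 31 out of 100 (illustrative):
print(two_proportion_z(39, 100, 31, 100))
```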