Hugging Face, the world's largest open-source AI community, recently released the latest Open LLM Leaderboard, which shows that all ten of the top-ranked open-source large models are derivatives fine-tuned from Alibaba's open-source Tongyi Qwen models. This result underscores Qwen's dominant position in the open-source AI field and further extends its global influence.
The Open LLM Leaderboard is widely regarded as the most authoritative ranking of open-source large models today, evaluating them across multiple dimensions, including reading comprehension, logical reasoning, mathematical computation, and factual question answering. Tongyi Qwen has meanwhile grown into the world's largest open-source model family: its derivative models now exceed 90,000 in number, surpassing Meta's Llama series and ranking first globally. In Hugging Face's 2024 download statistics for open-source models, Qwen2.5-1.5B-Instruct alone accounted for 26.6% of all downloads, making it the most downloaded open-source model in the world.
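For context, derivative models like those on the leaderboard are typically built by fine-tuning a base checkpoint that developers first pull from the Hugging Face Hub. Below is a minimal, illustrative sketch of loading and prompting Qwen2.5-1.5B-Instruct with the transformers library; the prompt text is hypothetical, and the snippet assumes torch and transformers (plus accelerate for `device_map="auto"`) are installed.

```python
# Illustrative sketch: loading the Qwen2.5-1.5B-Instruct checkpoint from the
# Hugging Face Hub and running a single chat turn. The prompt is a made-up
# example; requires torch, transformers, and accelerate to be installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # place weights on GPU if one is available
)

messages = [
    {"role": "user", "content": "Briefly explain what an open-source LLM is."},
]

# Instruct-tuned Qwen models ship a chat template; apply it to build the prompt.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)

# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint serves as the starting point for fine-tuning, which is how the tens of thousands of Qwen derivatives counted above are typically produced.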
Additionally, the high-profile AI company DeepSeek recently open-sourced six distilled models built on its R1 reasoning model, four of which use Qwen as the base. The team of renowned AI scientist Fei-Fei Li likewise trained its s1 reasoning model on a Qwen foundation using comparatively little compute and data. These results further demonstrate the capability and flexibility of the Qwen models.
In summary, Alibaba's Tongyi Qwen has risen rapidly in the field of open-source large models, strengthening its brand influence while giving developers worldwide a rich set of tools and resources. As open-source technology continues to evolve, future AI applications will become even more diverse and intelligent.