The National Supercomputing Internet Platform recently announced the launch of its "AI Ecosystem Partner Acceleration Program," which offers enterprise users a package of benefits, including three months of free access to the DeepSeek API and backing from a computing power resource pool of tens of millions of core hours.
The platform has reportedly completed deployment of the full version of the DeepSeek model. It now interconnects more than 20 supercomputing and intelligent computing centers across 14 provinces, aggregating over 6,500 computing power products, among them nearly 240 AI model services. These include domestic open-source models such as DeepSeek and Qwen, as well as internationally prominent open-source AI models such as Llama, Stable Diffusion, and Gemma.
On the deployment side, the platform supports multiple versions of the DeepSeek-R1 model, including the 1.5B, 8B, 70B, and 671B specifications, which can be rapidly deployed as private API services. In addition, the 7B, 14B, and 32B versions of the R1 series can be deployed as AI web applications, giving enterprise users more flexible options across application scenarios.
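Privately deployed model services of this kind commonly expose an OpenAI-compatible chat-completions endpoint. The sketch below shows how an enterprise client might assemble such a request; the endpoint URL, API key, and model identifier are illustrative assumptions, not details published by the platform.

```python
import json

# Hypothetical endpoint and credentials -- placeholders only; the actual
# private API details would be issued to each enterprise at deployment time.
API_BASE = "https://example-private-deployment/v1"
API_KEY = "YOUR_API_KEY"

def build_chat_request(prompt: str, model: str = "deepseek-r1-8b") -> dict:
    """Construct an OpenAI-style chat-completions payload for a deployed model.

    The model name is an assumed identifier; a real deployment would map to
    one of the hosted R1 sizes (e.g. 1.5B, 8B, 70B, or 671B).
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize our Q3 compute usage.")
print(json.dumps(payload, indent=2))

# Sending the request would then look like (requires the `requests` package
# and a live endpoint, so it is left as a comment here):
#   requests.post(f"{API_BASE}/chat/completions",
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
```

This mirrors the common pattern of swapping only the base URL and model name when moving between a public API and a privately deployed one, so existing client code needs minimal changes.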
The launch of this acceleration program underscores the National Supercomputing Internet Platform's push to broaden the adoption of AI technology, and is expected to lower the barriers to AI application for more enterprises and accelerate the digital transformation of industries.