Shenzhen Yuanxiang Information Technology Co., Ltd. (XVERSE) recently announced the release of XVERSE-MoE-A36B, China's largest open-source Mixture of Experts (MoE) large model. The release marks a significant advance for China's AI field, bringing domestic open-source technology to a globally leading level.

The XVERSE-MoE-A36B model has 255 billion total parameters and 36 billion active parameters, yet its performance rivals that of models with over 100 billion parameters, a cross-class leap. It cuts training time by 30% and doubles inference performance, significantly lowering the cost per token and making low-cost deployment of AI applications feasible.


Yuanxiang XVERSE's "High-Performance Toolkit" series of models is fully open-source and unconditionally free for commercial use, giving small and medium-sized enterprises, researchers, and developers more options. The MoE architecture combines expert sub-models specialized in different domains and activates only a subset of them for each input, breaking through the limitations of traditional scaling laws: it expands total model scale while reducing training and inference compute, maximizing performance, as sketched below.
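The routing idea behind an MoE layer can be sketched in a few lines of PyTorch. This is a generic top-k router for illustration only, not XVERSE's actual implementation; the class name TopKMoELayer, hidden sizes, expert count (8), and top-k value (2) are placeholder assumptions.

```python
# Minimal sketch of sparse top-k expert routing, the core idea behind MoE
# architectures. Sizes and expert counts here are illustrative assumptions,
# not the configuration of XVERSE-MoE-A36B.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        # The router scores every expert for every token.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = self.router(x)                # (batch, seq, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the chosen experts
        out = torch.zeros_like(x)
        # Each expert only processes the tokens routed to it, so per-token
        # compute tracks the active parameters, not the total parameter count.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = TopKMoELayer()
    tokens = torch.randn(2, 16, 512)
    print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```

Because only the top-k experts run for each token, total parameters can grow far beyond the compute budget of a single forward pass, which is how a 255-billion-parameter model can be served at the cost of roughly 36 billion active parameters.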

In several authoritative evaluations, Yuanxiang's MoE significantly outperforms a number of comparable models, including the domestic 100-billion-parameter-class MoE model Skywork-MoE, the established MoE leader Mixtral-8x22B, and the 314-billion-parameter open-source MoE model Grok-1-A86B.

The large models are available for free download.