Tencent Upgrades Angel Framework, Improving Large Model Training Efficiency by 2.6 Times

Tencent has recently upgraded its machine learning framework Angel, boosting the efficiency of large-scale model training by 2.6 times. By combining multi-dimensional parallelism with optimized storage, the AngelPTM training framework improves the stability of large-scale model training, while the newly introduced AngelHCF inference framework increases inference speed by 1.3 times. For models with up to a trillion parameters, the upgrade can cut training compute costs by 50% and supports ultra-large-scale training with up to 10,000 cards on a single task. More than 300 businesses have already connected to the Hunyuan large model, broadly driving the development of large-model applications.
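The announcement does not describe Angel's internals, but "multi-dimensional parallelism" in large-model training generally means combining data, tensor, and pipeline parallelism, often with optimizer state offloaded to host memory to free GPU memory. The sketch below is a minimal, hypothetical illustration of how those dimensions multiply up to a roughly 10,000-card job; the ParallelPlan class and all of its numbers are assumptions for illustration, not AngelPTM's actual configuration or API.

```python
from dataclasses import dataclass

@dataclass
class ParallelPlan:
    """Hypothetical layout combining parallelism dimensions; not Angel's actual API."""
    data_parallel: int       # model replicas, each training on a different data shard
    tensor_parallel: int     # each layer's weight matrices split across this many cards
    pipeline_parallel: int   # consecutive layer groups placed on different cards
    offload_optimizer: bool  # keep optimizer states in host memory to free GPU memory

    @property
    def cards_required(self) -> int:
        # The dimensions multiply: every data-parallel replica is itself
        # a (tensor x pipeline) grid of cards.
        return self.data_parallel * self.tensor_parallel * self.pipeline_parallel


# Example: one way a ~10,000-card job could be factored (illustrative numbers only).
plan = ParallelPlan(data_parallel=160, tensor_parallel=8, pipeline_parallel=8,
                    offload_optimizer=True)
print(plan.cards_required)  # 160 * 8 * 8 = 10240 cards
```

Offloading optimizer state to host memory is one common way to fit trillion-parameter training into a fixed GPU budget, which is consistent with the cost savings the announcement cites, though the specific techniques Angel uses are not detailed here.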

Source: 站长之家 (ChinaZ.com)
This article is from AIbase Daily