Taotian Group Collaborates with Aicheng Technology to Open Source the Megatron-LLaMA Large Model Training Framework

Taotian Group, in collaboration with Aicheng Technology, has open-sourced Megatron-LLaMA, a framework for training large language models. The project aims to improve training performance, reduce training costs, and remain compatible with the LLaMA community. The framework achieves a 176% acceleration in 32-GPU training and shows high tolerance for network instability. Going forward, Megatron-LLaMA will focus on adaptive selection of optimal configurations, support for modifications to more model architectures, and high-performance training solutions across diverse hardware environments.
Source: 机器之心 (Synced)
© AIbase 2024 · https://www.aibase.com/news/1321