TaoTian Group, in collaboration with iOrange Technology, has officially open-sourced Megatron-LLaMA, a large-scale model training framework designed to improve the training performance of large language models while reducing training costs. In tests, the framework achieved a 176% speedup in 32-GPU training and exhibited linear scalability. The project has been released on GitHub, and its maintainers plan to keep pace with community developments, advancing adaptive configuration and support for more models. In addition, Megatron-LLaMA improves the gradient aggregation mechanism and optimizes the backpropagation process. By open-sourcing the framework, the two teams lower the barrier to training large models and make a significant contribution to the open-source community.
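
The announcement does not spell out how the gradient aggregation and backpropagation changes work, but a common technique in this area is to overlap gradient communication with backward computation: an asynchronous all-reduce is launched for each gradient as soon as it is ready, instead of waiting for the entire backward pass to finish. The PyTorch sketch below is a minimal illustration of that general idea, not Megatron-LLaMA's actual implementation; it assumes an already-initialized process group, requires PyTorch 2.1 or later for the post-accumulate hook, and the names attach_overlap_hooks and finalize are hypothetical.

```python
# Illustrative sketch only -- not the Megatron-LLaMA implementation.
# Assumes torch.distributed is already initialized (e.g. via torchrun)
# and PyTorch >= 2.1 for register_post_accumulate_grad_hook.
import torch
import torch.distributed as dist


def attach_overlap_hooks(model: torch.nn.Module):
    """Launch an async all-reduce for each gradient as soon as it is
    fully accumulated, so communication for later layers overlaps with
    backward computation of earlier layers. Attach once per model."""
    pending = []  # (communication handle, parameter) pairs

    def hook(param: torch.Tensor) -> None:
        # async_op=True returns immediately; the all-reduce proceeds on a
        # separate communication stream while autograd keeps computing.
        work = dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, async_op=True)
        pending.append((work, param))

    for p in model.parameters():
        if p.requires_grad:
            p.register_post_accumulate_grad_hook(hook)

    def finalize() -> None:
        # Drain outstanding communication and turn sums into means.
        world_size = dist.get_world_size()
        for work, param in pending:
            work.wait()
            param.grad.div_(world_size)
        pending.clear()

    return finalize


# Hypothetical training step:
#   finalize = attach_overlap_hooks(model)   # once, after model creation
#   loss.backward()   # all-reduces fire layer by layer during backward
#   finalize()        # wait for communication, then optimizer.step()
```

Because gradients for the last layers are produced first during backpropagation, their all-reduces can run concurrently with the computation still happening for earlier layers, hiding much of the communication latency behind useful work.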