Microsoft has open-sourced FP8-LM, a framework that has achieved significant results in training large language models. The framework uses FP8 mixed-precision training: when training the GPT-175B model, it runs 64% faster than the BF16 baseline and cuts memory usage by 42%. These savings make it straightforward to scale up the size of models that can be trained on the same hardware, marking a notable breakthrough in large-model training.
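The core idea behind FP8 mixed precision is that tensors can be stored and computed in an 8-bit floating-point format if each one carries a scaling factor that maps its values into FP8's narrow dynamic range. The sketch below is not FP8-LM's actual implementation, just a minimal conceptual illustration of per-tensor-scaled FP8 quantization, assuming PyTorch 2.1+ with the torch.float8_e4m3fn dtype available:

```python
# Conceptual sketch of per-tensor-scaled FP8 (E4M3) quantization.
# Not FP8-LM's code -- just an illustration of why scaling makes
# 8-bit floats usable despite their narrow dynamic range.
import torch

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in float8_e4m3fn

def to_fp8(t: torch.Tensor):
    """Quantize a higher-precision tensor to FP8 with a per-tensor scale."""
    amax = t.abs().max().float().clamp(min=1e-12)  # avoid division by zero
    scale = FP8_E4M3_MAX / amax                    # stretch values to fill the FP8 range
    t_fp8 = (t.float() * scale).to(torch.float8_e4m3fn)
    return t_fp8, scale

def from_fp8(t_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Dequantize back to FP32 by undoing the scale."""
    return t_fp8.to(torch.float32) / scale

# Small-magnitude activations would underflow in raw FP8,
# but survive quantization once rescaled.
x = torch.randn(4, 4, dtype=torch.bfloat16) * 0.01
x8, s = to_fp8(x)
err = (from_fp8(x8, s) - x.float()).abs().max()
print(f"max quantization error: {err.item():.2e}")
```

The per-tensor scale is the key design choice: without it, values far from 1.0 would underflow or overflow FP8's roughly [-448, 448] range, which is why mixed-precision schemes track scaling factors alongside the low-precision tensors.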