Recently, the AI field has been shaken up again. Moonshot announced the open-sourcing of its new optimizer, Muon, which achieves roughly twice the computational efficiency of the widely used AdamW. The launch coincides with DeepSeek's upcoming release of multiple codebases and has drawn significant industry attention and discussion.
The Muon optimizer was first proposed in 2024 by OpenAI researcher Keller Jordan and collaborators and showed excellent performance when training small-scale models. As model size increased, however, the original Muon ran into performance bottlenecks. To address this, the Moonshot team made two key technical improvements: adding weight decay and rescaling updates so that their root-mean-square (RMS) magnitude stays consistent across parameters of different shapes. These changes allow Muon to be applied to large-scale training without additional hyperparameter tuning.
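To make the two additions concrete, here is a minimal sketch in PyTorch of a Muon-style update for a single 2-D weight matrix. The Newton-Schulz orthogonalization and its coefficients follow the publicly released Muon code; the RMS-matching scale factor and the decoupled weight decay illustrate the Moonshot modifications described above. The function names, the 0.2 RMS target, and the default hyperparameters are illustrative assumptions rather than Moonshot's exact implementation, which is available in the open-sourced repository.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5, eps: float = 1e-7) -> torch.Tensor:
    """Approximately orthogonalize a 2-D update matrix with the quintic
    Newton-Schulz iteration used by Muon (coefficients from the public release)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G.bfloat16()
    if G.size(0) > G.size(1):
        X = X.T                          # iterate on the smaller Gram matrix
    X = X / (X.norm() + eps)             # bring the spectral norm into range
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * A @ A
        X = a * X + B @ X
    if G.size(0) > G.size(1):
        X = X.T
    return X.to(G.dtype)

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=2e-2, beta=0.95, weight_decay=0.1):
    """One Muon-style step for a 2-D weight matrix (sketch only).

    The two additions reported by Moonshot are reflected as:
      1) decoupled (AdamW-style) weight decay, and
      2) rescaling the orthogonalized update by 0.2 * sqrt(max(rows, cols))
         so its RMS stays consistent with AdamW's typical update magnitude
         (the 0.2 target is an assumption for illustration).
    """
    momentum_buf.mul_(beta).add_(grad)               # heavy-ball momentum buffer
    g = grad.add(momentum_buf, alpha=beta)           # Nesterov-style lookahead
    update = newton_schulz_orthogonalize(g)
    n, m = param.shape
    update.mul_(0.2 * max(n, m) ** 0.5)              # consistent-RMS scaling
    param.mul_(1 - lr * weight_decay)                # decoupled weight decay
    param.add_(update, alpha=-lr)
```

In a full optimizer, a step like this would apply only to 2-D weight matrices, with embeddings, output heads, and 1-D parameters such as biases and norms handled by AdamW, as in the original Muon recipe.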
The new Muon optimizer has already been used to train the newly released Moonlight model, a Mixture-of-Experts (MoE) model with 3B activated parameters out of 16B total. Trained on 5.7 trillion tokens, Moonlight advances the current compute-performance Pareto frontier: for the same training budget, it achieves better performance than comparable models.
Moonshot has also open-sourced Muon's implementation and released the corresponding pre-trained and intermediate checkpoints, providing valuable resources for further research. Scaling experiments reported by the team show that Muon needs only about 52% of the training FLOPs required by AdamW to reach comparable performance, further validating its efficiency for large-scale language model training.
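For readers who want to experiment with the released checkpoints, a loading sketch with Hugging Face Transformers might look like the following. The repository id and the need for `trust_remote_code` are assumptions based on typical Hub releases of custom MoE architectures; the Moonlight repository gives the authoritative instructions.

```python
# Minimal usage sketch. The repo id "moonshotai/Moonlight-16B-A3B" is an
# assumption; check the Moonlight GitHub repository for the exact id and flags.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moonshotai/Moonlight-16B-A3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",          # load in the checkpoint's native precision
    device_map="auto",           # requires `accelerate`; omit for CPU-only loading
    trust_remote_code=True,      # the MoE architecture may ship custom modeling code
)

prompt = "The Muon optimizer improves training efficiency because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```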
Moonshot's Muon optimizer not only outperforms traditional optimizers but, by being open-sourced, also injects new vitality into the wider AI field. As more researchers and developers adopt and build on it, the optimizer is expected to further advance artificial intelligence technology.
Paper Link: https://github.com/MoonshotAI/Moonlight/blob/master/Moonlight.pdf