Qwen2.5-Coder is a series of large language models optimized for code generation, available in six mainstream sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the diverse needs of developers. Built on the strong Qwen2.5 backbone and pretrained on an expanded corpus of 5.5 trillion tokens spanning source code, text-code grounding data, and synthetic data, Qwen2.5-Coder delivers significant improvements in code generation, code reasoning, and code fixing. This makes it one of the most advanced open-source code LLMs, with coding capabilities comparable to GPT-4o. Additionally, Qwen2.5-Coder provides a more comprehensive foundation for real-world applications such as code agents.