SmolLM2 is a family of lightweight language models available in 135M, 360M, and 1.7B parameter sizes. The models handle a wide range of tasks while remaining small enough for on-device deployment. The 1.7B version shows significant improvements over its predecessor, SmolLM1-1.7B, in instruction following, knowledge, reasoning, and mathematics. It was trained on a mix of datasets including FineWeb-Edu, DCLM, and The Stack, and the instruction-tuned variant was further aligned with Direct Preference Optimization (DPO) on UltraFeedback. The model also supports tasks such as text rewriting, summarization, and function calling.
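
Below is a minimal sketch of running the instruction-tuned 1.7B variant with the Hugging Face Transformers library. The checkpoint name `HuggingFaceTB/SmolLM2-1.7B-Instruct` and the generation settings are assumptions for illustration, not a prescribed setup.

```python
# Sketch: load the instruct model and generate a reply to a chat prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumed Hub model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Format the conversation with the model's chat template and generate.
messages = [{"role": "user", "content": "Summarize the water cycle in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(
    inputs, max_new_tokens=128, do_sample=True, temperature=0.2, top_p=0.9
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to the 135M and 360M checkpoints by swapping the model id; the smaller sizes trade some quality for lower memory and latency, which is the main reason to pick them for on-device use.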