The AI2 team has introduced OLMo, an Open Language Model framework designed to support research on and experimentation with large-scale language models. The framework provides training code, model weights, and evaluation code on Hugging Face and GitHub, enabling the academic community and researchers to collectively advance the science of language models: for example, by studying how new pre-training data subsets affect downstream performance, or by investigating novel pre-training methods and training stability.