OLMo-2-1124-13B-DPO

High-performance English language model suitable for diverse tasks.

Common Product, Programming, Language Model, Natural Language Processing
OLMo-2-1124-13B-DPO is a 13-billion-parameter large language model that has undergone supervised fine-tuning followed by DPO (Direct Preference Optimization) training. It primarily targets English and aims to deliver strong performance on tasks such as chat, mathematics (e.g., GSM8K), and instruction following (e.g., IFEval). The model is part of the OLMo series, designed to advance open scientific research on language models. Pretraining is based on the Dolma dataset, and the code, checkpoints, logs, and training details are publicly available.
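Because the checkpoints are publicly released, the model can be loaded like any other causal LM. Below is a minimal sketch using Hugging Face transformers; it assumes the checkpoint is hosted as allenai/OLMo-2-1124-13B-DPO, that the installed transformers version includes OLMo 2 support, and that accelerate is available for device placement. The prompt and generation settings are illustrative, not prescribed by the model's documentation.

```python
# Minimal sketch: load the DPO checkpoint and run a chat-style generation.
# Assumes the model is hosted as "allenai/OLMo-2-1124-13B-DPO" and that the
# installed transformers release supports OLMo 2; `device_map="auto"` needs
# the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-13B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 13B weights: bf16 roughly halves memory vs fp32
    device_map="auto",
)

# Format the request with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```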

OLMo-2-1124-13B-DPO Visit Over Time

Monthly Visits: 19,075,321
Bounce Rate: 45.07%
Pages per Visit: 5.5
Visit Duration: 00:05:32
