OLMo-2-1124-7B-RM
A 7-billion-parameter reward model from the OLMo 2 family.
OLMo-2-1124-7B-RM is a 7-billion-parameter reward model developed by Allen AI (the Allen Institute for AI) and distributed through Hugging Face. It is trained on the Tülu 3 dataset together with preference datasets, and it is used to initialize the value model in RLVR training for the OLMo 2 instruct pipeline, which targets a diverse range of language tasks including chat, mathematical problem-solving, and text classification. The release of the OLMo series aims to advance scientific research in language modeling, promoting transparency and accessibility through open-source code, checkpoints, logs, and related training details.
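Because the checkpoint is distributed as a standard Hugging Face model, a reward score for a prompt/response pair can in principle be obtained with the transformers library. The snippet below is a minimal sketch, not an official recipe: it assumes the model ID is allenai/OLMo-2-1124-7B-RM and that the checkpoint loads as a single-logit sequence-classification (reward) head; consult the model card for the exact loading instructions and chat template.

```python
# Minimal sketch: scoring a chat response with the reward model via transformers.
# Assumes the checkpoint is published as "allenai/OLMo-2-1124-7B-RM" and exposes
# a single-logit sequence-classification (reward) head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B-RM"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

# A prompt/response pair formatted with the tokenizer's chat template.
messages = [
    {"role": "user", "content": "What is 12 * 9?"},
    {"role": "assistant", "content": "12 * 9 = 108."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    # The reward is the scalar logit assigned to the full conversation.
    reward = model(input_ids=input_ids).logits[0, 0].item()

print(f"reward score: {reward:.3f}")
```

Higher scores indicate responses the reward model prefers; comparing scores across candidate responses to the same prompt is the typical use, for example when ranking outputs or initializing the value model for RLVR.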
OLMo-2-1124-7B-RM Visit Over Time
Monthly Visits: 20,899,836
Bounce Rate: 46.04%
Pages per Visit: 5.2
Visit Duration: 00:04:57