imp-v1-3b
A powerful multimodal small language model.
The Imp project aims to provide a series of strong multimodal small language models (MSLMs). Imp-v1-3b is a 3-billion-parameter MSLM built on the small but capable language model Phi-2 (2.7B parameters) and the SigLIP visual encoder (400M parameters), trained on the LLaVA-v1.5 training dataset. Imp-v1-3b significantly outperforms similarly sized models on a range of multimodal benchmarks, and even edges out the larger LLaVA-7B model on some of them.
imp-v1-3b Visits Over Time
Monthly Visits: 20,899,836
Bounce Rate: 46.04%
Pages per Visit: 5.2
Visit Duration: 00:04:57