imp-v1-3b

A powerful multimodal small language model.

The Imp project aims to provide a family of strong multimodal small language models (MSLMs). Imp-v1-3b is a 3-billion-parameter MSLM built on the small but capable language model Phi-2 (2.7B parameters) and the SigLIP visual encoder (400M parameters), and trained on the LLaVA-v1.5 training dataset. Imp-v1-3b significantly outperforms models of similar size on a range of multimodal benchmarks, and even edges out the larger LLaVA-7B model on some of them.
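The model is published as a Hugging Face checkpoint with custom remote code, so it can be loaded with the transformers library. The snippet below is a minimal sketch, not an official example: the repository id "MILVLG/imp-v1-3b", the LLaVA-style prompt with an <image> placeholder, and the image_preprocess helper are assumptions based on typical usage of such models; check the model card for the actual interface.

```python
# Minimal, hypothetical sketch of running imp-v1-3b with Hugging Face Transformers.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MILVLG/imp-v1-3b"  # assumed Hugging Face repository id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).eval()

# LLaVA-style prompt with an image placeholder token (assumed convention).
prompt = "USER: <image>\nWhat is shown in this picture?\nASSISTANT:"
image = Image.open("example.jpg")

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# image_preprocess and the `images` keyword come from the model's custom remote
# code in this sketch; the real implementation may expose a different interface.
image_tensor = model.image_preprocess(image)
output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=100)
print(tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True))
```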

imp-v1-3b Visit Over Time

Monthly Visits: 18,200,568
Bounce Rate: 44.11%
Pages per Visit: 5.8
Visit Duration: 00:05:46

imp-v1-3b Visit Trend

imp-v1-3b Visit Geography

imp-v1-3b Traffic Sources

imp-v1-3b Alternatives