InternVL2_5-78B-MPO

An advanced series of multimodal large language models with strong overall performance.

InternVL2.5-MPO is a series of multimodal large language models built on InternVL2.5 and Mixed Preference Optimization (MPO). It combines the incrementally pre-trained InternViT vision encoder with pre-trained large language models such as InternLM 2.5 and Qwen 2.5, connected through a randomly initialized MLP projector. The series is trained on MMPR, a multimodal reasoning preference dataset of roughly 3 million samples, and its data construction pipeline together with mixed preference optimization improves both the model's reasoning ability and the quality of its answers.
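As a rough illustration of the preference-optimization idea behind MPO (which, per the InternVL2.5-MPO description, mixes a preference term with quality and generation terms), the sketch below computes a DPO-style preference loss for a single chosen/rejected answer pair. This is a minimal, self-contained toy, not the actual MPO training objective; the function name, the toy log-probabilities, and the `beta` value are all illustrative assumptions.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style preference loss for one pair: -log sigmoid(beta * margin).

    The margin is how much more the policy prefers the chosen answer over
    the rejected one, relative to a frozen reference model.
    (Toy sketch; MPO itself mixes this kind of term with quality and
    generation losses.)
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Toy log-probabilities: the policy favors the chosen answer more strongly
# than the reference does, so the margin is positive and the loss drops
# below -log(0.5) ~= 0.693.
loss = dpo_loss(-5.0, -9.0, -6.0, -8.0, beta=0.5)
print(round(loss, 4))  # -> 0.3133
```

Minimizing this loss pushes the policy to widen the gap between preferred and rejected answers, which is the mechanism preference datasets like MMPR exploit.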

InternVL2_5-78B-MPO Visit Over Time

- Monthly Visits: 21,315,886
- Bounce Rate: 45.50%
- Pages per Visit: 5.2
- Visit Duration: 00:05:02
