CogVLM2

Second-generation multimodal pre-trained dialogue model

Tags: Common Product, Productivity, Multimodal, Pre-trained Model
CogVLM2, developed by a team from Tsinghua University, is a second-generation multimodal pre-trained dialogue model. It achieves significant improvements on multiple benchmarks, supports an 8K context length, and handles image resolutions up to 1344×1344. Both Chinese and English versions are open-source, with performance comparable to some closed-source models.
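Because the checkpoints are open-source, they can be run locally. The snippet below is a minimal sketch (not taken from this listing) of querying a CogVLM2 checkpoint with an image via Hugging Face Transformers; the repository id, the build_conversation_input_ids helper, and the generation settings are assumptions based on the usage pattern published in the CogVLM2 repository and may differ from the current release.

```python
# Minimal sketch of running an open-source CogVLM2 checkpoint with
# Hugging Face Transformers. The repo id and the remote-code helper
# build_conversation_input_ids are assumptions based on the published
# CogVLM2 usage pattern; check the model card for the exact API.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "THUDM/cogvlm2-llama3-chat-19B"  # assumed checkpoint name
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to(DEVICE).eval()

image = Image.open("example.jpg").convert("RGB")  # resolutions up to 1344x1344
query = "Describe this image."

# Pack the prompt and image into model inputs via the remote-code helper.
packed = model.build_conversation_input_ids(
    tokenizer, query=query, images=[image], template_version="chat"
)
inputs = {
    "input_ids": packed["input_ids"].unsqueeze(0).to(DEVICE),
    "token_type_ids": packed["token_type_ids"].unsqueeze(0).to(DEVICE),
    "attention_mask": packed["attention_mask"].unsqueeze(0).to(DEVICE),
    "images": [[packed["images"][0].to(DEVICE).to(torch.bfloat16)]],
}

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens and decode only the newly generated reply.
    output_ids = output_ids[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```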

CogVLM2 Visits Over Time

Monthly Visits: 494,758,773
Bounce Rate: 37.69%
Pages per Visit: 5.7
Avg. Visit Duration: 00:06:29

Chart panels: CogVLM2 Visit Trend; CogVLM2 Visit Geography; CogVLM2 Traffic Sources

CogVLM2 Alternatives