MoE-LLaVA

A mixture-of-experts model built on large vision-language models

Tags: Large Scale Model, Multi-modal Learning
MoE-LLaVA is a mixture-of-experts model built on large vision-language models, demonstrating strong performance in multi-modal learning. It activates fewer parameters than comparable dense models while retaining high performance, and it can be trained in a short time. The model supports Gradio Web UI and CLI inference, and its repository provides a model zoo, requirements and installation instructions, training and validation scripts, customization guides, visualization tools, and an API.
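Since the listing highlights the mixture-of-experts design, the following minimal PyTorch sketch illustrates token-level top-k expert routing, the core idea behind activating only a few experts per token. It is an illustration under assumed settings, not MoE-LLaVA's actual implementation; the class name `SparseMoE`, the expert count, hidden sizes, and top-k value are all assumptions.

```python
# A minimal sketch of token-level top-k mixture-of-experts routing, for
# illustration only. Class name, expert count, hidden sizes, and top_k are
# assumptions; this is not MoE-LLaVA's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int = 512, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Gating network: scores each token against every expert.
        self.router = nn.Linear(dim, num_experts)
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        logits = self.router(x)                           # (tokens, num_experts)
        topk_w, topk_idx = logits.topk(self.top_k, dim=-1)
        topk_w = F.softmax(topk_w, dim=-1)                # renormalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, k] == e                # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += topk_w[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 512)       # 8 tokens of width 512
print(SparseMoE()(tokens).shape)   # torch.Size([8, 512])
```

Because each token is processed by only its top-k experts, the number of active parameters per forward pass stays small even as the total expert count grows, which is what lets sparse MoE models pair a low active-parameter budget with strong performance.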

MoE-LLaVA Visit Over Time

Monthly Visits: 494,758,773
Bounce Rate: 37.69%
Pages per Visit: 5.7
Visit Duration: 00:06:29

MoE-LLaVA Visit Trend

MoE-LLaVA Visit Geography

MoE-LLaVA Traffic Sources

MoE-LLaVA Alternatives