llava-llama-3-8b-v1_1 is a LLaVA model released by XTuner. It pairs meta-llama/Meta-Llama-3-8B-Instruct as the language model with the CLIP-ViT-Large-patch14-336 vision encoder, and was fine-tuned on ShareGPT4V-PT and InternVL-SFT data. The model accepts combined image and text input, delivers strong multimodal understanding, and is supported by a range of downstream deployment and evaluation toolkits.
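As a rough usage sketch, the model can be driven through the Hugging Face `transformers` LLaVA API, assuming a transformers-format checkpoint is available (the model ID `xtuner/llava-llama-3-8b-v1_1-transformers` below is an assumption; the XTuner-native weights are instead consumed by the XTuner toolkit). The prompt follows the Llama-3 chat template with LLaVA's `<image>` placeholder; verify both against the model card before relying on them.

```python
# Sketch only: model ID and prompt template are assumptions, not confirmed
# by this document -- check the published model card before use.

MODEL_ID = "xtuner/llava-llama-3-8b-v1_1-transformers"  # assumed checkpoint name


def build_prompt(question: str) -> str:
    """Compose a Llama-3 chat prompt containing LLaVA's <image> placeholder."""
    return (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"<image>\n{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def describe(image, question: str = "What is shown in this image?") -> str:
    """Run one image-question round trip (requires torch + transformers)."""
    import torch
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = processor(
        images=image, text=build_prompt(question), return_tensors="pt"
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return processor.decode(out[0], skip_special_tokens=True)
```

`build_prompt` is pure string assembly, so the conversation format can be inspected without loading the 8B weights; the heavyweight imports are deferred into `describe` for the same reason.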