NVIDIA recently launched NVILA, a new generation of open visual language model. The model is designed to optimize both accuracy and efficiency, and its strong performance positions it as a leader in the field of visual AI.
According to NVIDIA, NVILA cuts training costs by 4.5 times, reduces the memory needed for fine-tuning by 3.4 times, and nearly halves pre-filling and decoding latency. These figures come from comparisons with another large visual language model, LLaVA OneVision.
In video benchmarks, NVILA outperformed GPT-4o mini and also compared favorably with GPT-4o, Sonnet 3.5, and Gemini 1.5 Pro, while edging out Llama 3.2. However, NVIDIA noted that the model has not yet been published on the Hugging Face platform; the company promises to release the code and model soon so the results can be reproduced.
NVIDIA points out that training visual language models is very expensive: training a 7B-parameter visual language model takes roughly 400 GPU-days. Fine-tuning such models is also memory-intensive, with a 7B-parameter model requiring more than 64GB of GPU memory.
To address this, NVIDIA adopted an "expand then compress" approach that balances the model's accuracy and efficiency. Instead of downscaling photos and videos at input, the model takes in high-resolution images and multiple video frames so that no detail is lost.
During compression, the model shrinks the input by packing the visual information into fewer tokens, grouping pixels together while retaining the important information. NVIDIA notes in the paper that doubling the resolution doubles the number of visual tokens, which raises training and inference costs by more than a factor of two; they recover these costs by compressing spatial and temporal tokens.
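To make the compression step more concrete, here is a minimal sketch of spatial token grouping in PyTorch. The function name, shapes, and 2x2 group size are illustrative assumptions rather than NVIDIA's released code: neighboring visual tokens are folded into the channel dimension, so the token count drops by 4x while the information is preserved.

```python
import torch

def spatial_token_grouping(tokens: torch.Tensor, group: int = 2) -> torch.Tensor:
    """Merge each `group` x `group` block of visual tokens into one token.

    tokens: (batch, H, W, C) grid of visual tokens from the vision encoder.
    Returns (batch, H // group, W // group, C * group * group): the token
    count drops by group**2 while the content moves into the channels.
    """
    b, h, w, c = tokens.shape
    assert h % group == 0 and w % group == 0, "token grid must divide evenly"
    x = tokens.reshape(b, h // group, group, w // group, group, c)
    x = x.permute(0, 1, 3, 2, 4, 5)              # (b, H/g, W/g, g, g, c)
    return x.reshape(b, h // group, w // group, c * group * group)

# Example: a 32x32 grid of 1024-dim tokens (1,024 tokens) becomes 256 tokens.
vis = torch.randn(1, 32, 32, 1024)
compressed = spatial_token_grouping(vis)
print(compressed.shape)  # torch.Size([1, 16, 16, 4096])
```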
NVIDIA also demonstrated the model's capabilities, showing that NVILA can answer multiple queries about a single image or video, and compared its output with NVIDIA's previously released VILA 1.5 model. The company further detailed several other techniques, such as dynamic S2 scaling, DeltaLoss-based dataset pruning, and FP8-precision quantization.
These techniques are applied to an 8B-parameter model, and the specifics can be found on arXiv.
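As an illustration of the quantization idea mentioned above, the sketch below shows a generic per-tensor FP8 (E4M3) quantize/dequantize round-trip in PyTorch. It is a simplified assumption for clarity, not the FP8 training recipe NVIDIA describes in the paper.

```python
import torch

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_quantize(w: torch.Tensor):
    """Per-tensor FP8 quantization: scale into the E4M3 range, then cast."""
    scale = w.abs().max().clamp(min=1e-12) / E4M3_MAX
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def fp8_dequantize(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Cast back to FP32 and undo the scaling."""
    return w_fp8.to(torch.float32) * scale

w = torch.randn(1024, 1024)
q, s = fp8_quantize(w)
w_hat = fp8_dequantize(q, s)
# Small reconstruction error, at half the memory footprint of FP16 weights.
print((w - w_hat).abs().max())
```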
Paper link: https://arxiv.org/pdf/2412.04468
Key Points:
🌟 The NVILA model reduces training costs by 4.5 times, enhancing the efficiency of visual AI.
📉 NVILA preserves input detail by using high-resolution images and multiple video frames.
📊 NVIDIA promises to release the code and model soon to promote the reproducibility of research.