Google has unveiled PaLI-3, a vision-language model that, despite having only 5 billion parameters, performs strongly on multimodal benchmarks. The result is attributed to the SigLIP contrastive pretraining method, which makes smaller models more practical to train and deploy. This approach could help drive the development of the next generation of large-scale VLMs (vision-language models).