At the recent Gemma Developer Day held in Tokyo, Google officially introduced a new Japanese version of the Gemma AI model. The model's performance is on par with GPT-3.5, yet it has only 2 billion parameters, making it compact enough to run on mobile devices.


The newly released Gemma model excels at Japanese language processing while maintaining its capabilities in English. This is particularly significant for smaller models, which are prone to "catastrophic forgetting" when fine-tuned on a new language: newly acquired knowledge overwrites what was previously learned. Gemma has successfully overcome this challenge, demonstrating robust processing in both languages.

Notably, Google has also immediately released the model's weights, training materials, and examples through platforms such as Kaggle and Hugging Face, helping developers get started faster. This means developers can easily run the model locally, which opens up new possibilities, especially in edge-computing applications.

To encourage more international developers, Google has launched a competition called "Unlock Global Communication with Gemma," with prizes totaling $150,000. This initiative aims to help developers adapt the Gemma model to local languages. Projects in Arabic, Vietnamese, and Zulu are already underway. In India, developers are working on the "Navarasa" project, aiming to optimize the model for 12 Indian languages, while another team is researching fine-tuning to support Korean dialects.

The Gemma 2 series aims to achieve higher performance with fewer parameters. It holds its own against comparable models from companies such as Meta, and in some cases the 2-billion-parameter Gemma 2 can even surpass much larger models, such as the 70-billion-parameter LLaMA-2.

Developers and researchers can access the Gemma-2-2B model and the other Gemma models free of charge through Hugging Face, Google AI Studio, and Google Colab; the models are also available in the Vertex AI Model Garden.
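As a rough illustration of local use, the sketch below loads the instruction-tuned checkpoint with the Hugging Face `transformers` library. It is a minimal sketch, not an official recipe: it assumes `transformers` and `torch` are installed and that you have accepted the Gemma license on Hugging Face (the `google/gemma-2-2b-it` checkpoint is gated), and the first run downloads several gigabytes of weights.

```python
# Minimal sketch: running Gemma 2 2B locally via Hugging Face transformers.
# Assumes the gated checkpoint "google/gemma-2-2b-it" has been unlocked by
# accepting the Gemma license, and that `transformers` + `torch` are installed.

MODEL_ID = "google/gemma-2-2b-it"  # instruction-tuned 2B checkpoint

def build_chat(prompt: str) -> list[dict]:
    """Wrap a single user prompt in the chat-message format the
    text-generation pipeline accepts for instruction-tuned models."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    # Heavy import kept inside the guard so the helpers above can be
    # used/tested without pulling in torch or downloading weights.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        device_map="auto",  # use a GPU if one is available
    )
    messages = build_chat("Translate 'good morning' into Japanese.")
    out = generator(messages, max_new_tokens=64)
    print(out[0]["generated_text"])
```

On a machine without a GPU the same code runs on CPU, just slowly; for edge deployments, smaller quantized variants of the weights are the more practical route.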

Official Website: https://aistudio.google.com/app/prompts/new_chat?model=gemma-2-2b-it

Hugging Face: https://huggingface.co/google

Google Colab: https://ai.google.dev/gemma/docs/keras_inference?hl=de

Key Points:

🌟 Google introduces a new Japanese Gemma AI model that rivals GPT-3.5 in performance with only 2 billion parameters, ideal for running on mobile devices.

🌍 Google launches the "Unlock Global Communication with Gemma" competition with $150,000 in prizes, encouraging the development of local language versions.

📈 The Gemma 2 series models achieve high performance with fewer parameters, in some cases surpassing much larger models and broadening developers' application options.