Gemini Embedding is an experimental text embedding model from Google, available through the Gemini API. The model achieves state-of-the-art results on the Massive Text Embedding Benchmark (MTEB) Multilingual leaderboard, surpassing previous top-ranked models. It converts text into high-dimensional numerical vectors that capture semantic and contextual information, and is widely used in scenarios such as retrieval, classification, and similarity detection. Gemini Embedding supports over 100 languages, accepts inputs of up to 8K tokens, and produces 3,072-dimensional output vectors. It also incorporates Matryoshka Representation Learning (MRL), which allows the output dimension to be truncated to smaller sizes to reduce storage and compute costs. The model is currently in an experimental stage, with a stable, generally available version planned for the future.
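
To make the API usage concrete, below is a minimal sketch of requesting an embedding and asking for a reduced output dimension, as MRL-trained embeddings are designed to allow. It assumes the google-genai Python SDK and the experimental model name `gemini-embedding-exp-03-07`; the exact model identifier, SDK version, and parameter names may differ in practice.

```python
# Sketch: embed a piece of text with the experimental Gemini Embedding model
# and request a truncated output dimension (assumes the google-genai SDK).
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder API key

result = client.models.embed_content(
    model="gemini-embedding-exp-03-07",  # assumed experimental model name
    contents="What is the meaning of life?",
    config=types.EmbedContentConfig(
        # Request a smaller vector than the full 3,072 dimensions;
        # MRL lets the leading dimensions stand alone as a usable embedding.
        output_dimensionality=768,
    ),
)

vector = result.embeddings[0].values
print(len(vector))  # expected: 768
```

Requesting a lower dimensionality like this trades a small amount of quality for substantially cheaper vector storage and faster similarity search, which is the main practical benefit MRL provides.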