Google is rapidly developing its next-generation large AI model, Gemini, drawing on its deep reserves of compute and infrastructure. Reports indicate that Gemini, a multimodal large model built on a new architecture, is iterating at a remarkable pace: its latest training run is said to have used on the order of 1e26 FLOPs of compute, roughly five times the training compute of GPT-4. Google's computational resources continue to grow quickly, and Gemini is reportedly slated for release this fall. These abundant compute resources and efficient infrastructure give Google a powerful edge in AI research and commercial deployment, and could mark a significant turning point in the competition between Google and OpenAI.