With the rapid development of generative AI, the traditional belief that "bigger is better" is undergoing a transformation. Several top AI scientists recently stated that the method of simply increasing data volume and computational power to enhance AI performance has reached a bottleneck, and new technological breakthrough directions are emerging.

Ilya Sutskever, co-founder of Safe Superintelligence and OpenAI, recently expressed the view that traditional pre-training methods have reached a performance plateau. This assertion is particularly noteworthy because it was his early advocacy for large-scale pre-training methods that gave rise to ChatGPT. Now, he indicates that the AI field has transitioned from the "era of scale expansion" to the "era of miracles and discoveries."


Current large-scale model training faces multiple challenges: training runs that often cost tens of millions of dollars, hardware failure risks stemming from system complexity, lengthy testing cycles, and limits on available data and energy supply. These issues are prompting researchers to explore new technical paths.

Among these, "test-time compute" has garnered widespread attention. Instead of producing a single answer in one pass, this approach lets an AI model generate and evaluate multiple candidate solutions at inference time. OpenAI researcher Noam Brown offers an illustrative analogy: letting an AI think for 20 seconds in a hand of poker yields roughly the same performance gain as scaling the model's size and training time by 100,000 times.
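The core idea can be sketched as "best-of-n" sampling: spend extra compute at inference time by proposing several candidate answers and keeping the one a verifier scores highest. The sketch below is illustrative only; `propose_candidates` and `verifier_score` are hypothetical stand-ins for an actual model and a learned verifier, using a toy arithmetic task.

```python
import random

TRUE_ANSWER = 17 * 24  # the toy task: compute 17 * 24

def propose_candidates(n, rng):
    """Placeholder generator: noisy guesses near the true product,
    standing in for multiple sampled model outputs."""
    return [TRUE_ANSWER + rng.randint(-3, 3) for _ in range(n)]

def verifier_score(answer):
    """Placeholder verifier: higher score for answers closer to the
    true product, standing in for a learned scoring model."""
    return -abs(answer - TRUE_ANSWER)

def best_of_n(n=8, seed=0):
    """Best-of-n test-time compute: generate n candidates, then
    return the one the verifier ranks highest."""
    rng = random.Random(seed)
    candidates = propose_candidates(n, rng)
    return max(candidates, key=verifier_score)

if __name__ == "__main__":
    print(best_of_n(n=16))
```

Raising `n` trades more inference-time compute for a better chance that at least one candidate is correct, which is the trade-off the analogy above describes.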

Currently, leading AI labs including OpenAI, Anthropic, xAI, and DeepMind are actively developing their own versions of this technique. OpenAI has already applied it in its latest model, "o1," and Chief Product Officer Kevin Weil says that through these innovative methods, the company sees significant opportunities to improve model performance.

Industry experts believe that this shift in technological approach could reshape the competitive landscape of the entire AI industry and fundamentally change the resource demand structure of AI companies. This marks the entry of AI development into a new phase that emphasizes quality improvement rather than mere scale expansion.