Has the development of artificial intelligence hit a bottleneck? Jack Clark, co-founder of Anthropic, recently argued in his newsletter that it has not. He believes OpenAI's newly released o3 model shows that AI progress is not only not slowing down but may actually be accelerating.
In his newsletter "Import AI," Clark pushed back on claims that AI development is reaching its limits. "Anyone telling you that progress is slowing or that scaling is hitting a bottleneck is wrong," he wrote. He pointed out that OpenAI's new o3 model shows there is still considerable room for growth, but along a different path: rather than simply scaling the model up, o3 relies on reinforcement learning and additional computation at inference time.
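To make the distinction concrete, here is a minimal sketch of one well-known way to spend extra compute at inference time: best-of-n sampling against a scoring function. The `generate` and `score` functions below are hypothetical placeholders, not OpenAI's actual method; o3's internals are not public.

```python
import random

# Hypothetical stand-ins: neither function reflects how o3 actually
# works; they exist only to illustrate the test-time compute idea.
def generate(prompt: str) -> str:
    """Sample one candidate answer (placeholder: random choice)."""
    return random.choice([f"answer-{i}" for i in range(10)])

def score(prompt: str, answer: str) -> float:
    """Rate a candidate, e.g. with a learned verifier (placeholder)."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference-time compute by sampling n candidates
    and keeping the highest-scoring one. Raising n trades compute
    for answer quality without changing the model's weights."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

# More test-time compute (a larger n) generally buys better answers.
print(best_of_n("What is 17 * 24?", n=8))
```

The key point of the sketch is that quality here scales with a knob you turn at inference time, not with the size of the trained model.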
Clark believes this ability to "think out loud" at inference time opens up an entirely new axis for scaling. He expects the trend to accelerate in 2025, as major companies begin to combine traditional approaches, such as larger foundation models, with new ways of applying compute during both training and inference. This echoes what OpenAI said when it first introduced its o model series.
Clark warned that most people underestimate how quickly AI will advance: "I think basically no one realizes how significant future AI advancements will be."
He noted, however, that the cost of computation is a major challenge. The high-compute configuration of o3 requires 170 times the compute of its base configuration, which in turn demands more resources than o1, and o1 itself needs more than GPT-4o.
Clark explained that these new systems make costs harder to predict. In the past, costs were relatively straightforward, determined mainly by model size and output length. With o3, resource consumption can vary widely depending on the specific task.
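The following sketch illustrates why forecasting gets harder. The prices and token counts are made up for illustration and do not reflect OpenAI's actual billing; the point is only that a hidden, task-dependent number of reasoning tokens breaks the old cost formula.

```python
# Illustrative only: the flat rate and token counts below are
# assumptions, not real pricing figures.
PRICE_PER_TOKEN = 0.00001  # assumed flat rate, USD

def classic_cost(input_tokens: int, output_tokens: int) -> float:
    """Pre-o3 style: cost is a simple function of visible tokens."""
    return (input_tokens + output_tokens) * PRICE_PER_TOKEN

def reasoning_cost(input_tokens: int, output_tokens: int,
                   reasoning_tokens: int) -> float:
    """Reasoning models also consume hidden chain-of-thought tokens,
    whose count varies with task difficulty, so the total is hard
    to predict from the prompt alone."""
    total = input_tokens + output_tokens + reasoning_tokens
    return total * PRICE_PER_TOKEN

# The same visible prompt and answer can cost very different amounts
# depending on how long the model "thinks":
easy = reasoning_cost(200, 100, reasoning_tokens=500)
hard = reasoning_cost(200, 100, reasoning_tokens=50_000)
print(f"easy task: ${easy:.4f}, hard task: ${hard:.4f}")
```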
Despite these challenges, Clark remains confident that combining traditional scaling methods with the new approaches will produce "more significant" AI advances in 2025 than anything achieved so far.
Clark's predictions have sparked interest in Anthropic's own plans. The company has yet to release a "reasoning" or "test-time" model that can compete with OpenAI's o series or Google's Gemini Flash Thinking.
The previously announced flagship model Opus 3.5 remains on hold, reportedly because its performance gains did not justify the operating costs. While some read this as a sign of broader challenges in scaling large language models, Opus 3.5 was not a complete failure: it reportedly helped train the new Sonnet 3.5, which has become one of the most popular language models on the market.