The Arc Prize Foundation last week sharply revised its cost estimate for OpenAI's upcoming "reasoning" AI model, o3, raising it from an initial $3,000 per ARC-AGI task to a staggering $30,000. The correction suggests that running today's most capable AI models may cost roughly ten times more than previously anticipated.


While OpenAI hasn't released official pricing for o3, or even formally launched the model, the Arc Prize Foundation argues that OpenAI's currently most expensive model, o1-pro, is a more reasonable reference point. Mike Knoop, co-founder of the Arc Prize Foundation, stated: "We believe o1-pro is a closer approximation to the true cost of o3...because the compute used during testing was very large." Given this uncertainty, the foundation has marked o3 as "preview" on its leaderboard.

The high cost stems from the model's computational demands. According to the Arc Prize Foundation, o3-high (the highest-compute configuration of o3) uses 172 times more compute than o3-low (the lowest configuration) to solve ARC-AGI problems. Industry reports also suggest OpenAI is weighing premium plans for enterprise clients: The Information reported the company may charge up to $20,000 per month for specialized AI "agents".

While some argue that even the most expensive AI models remain cheaper than hiring human professionals, AI researcher Toby Ord points to potential efficiency issues: o3-high, for example, attempts each ARC-AGI task 1,024 times to achieve its best results.
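The figures above invite some quick arithmetic. A rough sketch, using only the reported estimates (not official OpenAI pricing), of what the per-attempt cost and a naive compute-scaled low-setting cost would look like:

```python
# Back-of-envelope arithmetic from the reported figures (estimates only,
# not official OpenAI pricing).
cost_per_task = 30_000      # revised Arc Prize estimate, USD per ARC-AGI task
samples_per_task = 1_024    # attempts o3-high reportedly makes per task
compute_ratio = 172         # reported o3-high vs. o3-low compute multiplier

# Implied cost of each individual attempt at the high setting.
cost_per_sample = cost_per_task / samples_per_task
print(f"~${cost_per_sample:.2f} per individual attempt")  # ~$29.30

# If cost scaled linearly with compute (an assumption, not a reported
# figure), the low configuration would be far cheaper per task.
est_low_cost = cost_per_task / compute_ratio
print(f"~${est_low_cost:,.0f} per task at the low setting")  # ~$174
```

Even at roughly $29 per attempt, a strategy of 1,024 tries per task compounds into the eye-catching per-task total the foundation reported.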