OpenAI recently began showing a more detailed reasoning process for its latest reasoning model, o3-mini, a move widely seen as a response to growing pressure from competitor DeepSeek-R1. The change marks a significant shift in OpenAI's strategy on model transparency.

Previously, OpenAI treated the chain of thought (CoT) as a core competitive advantage and chose to keep it hidden. However, as open models such as DeepSeek-R1 expose their full reasoning trajectories, that closed approach has become a liability. Although o3-mini still does not reveal the raw reasoning tokens, it now presents the reasoning process much more clearly.


On performance and cost, OpenAI is also catching up. o3-mini is priced at $4.40 per million output tokens, far below the earlier o1's $60 and close to the $7-8 range that U.S. providers charge for DeepSeek-R1. o3-mini also outperforms its predecessors on several reasoning benchmarks.
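For a rough sense of scale, the per-token prices quoted above can be turned into workload costs. The sketch below uses the article's prices; the 10-million-token volume is an assumed figure for illustration only.

```python
# Rough cost comparison using the per-million-output-token prices quoted above.
PRICES_PER_M_OUTPUT = {
    "o1": 60.00,
    "o3-mini": 4.40,
    "DeepSeek-R1 (US-hosted, upper end)": 8.00,
}

output_tokens = 10_000_000  # hypothetical workload: 10M output tokens

for model, price in PRICES_PER_M_OUTPUT.items():
    cost = output_tokens / 1_000_000 * price
    print(f"{model}: ${cost:,.2f}")
# o1: $600.00 | o3-mini: $44.00 | DeepSeek-R1 (US-hosted, upper end): $80.00
```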


Hands-on tests show that o3-mini's more detailed reasoning display does improve usability. When working with unstructured data, users can follow the model's reasoning logic and adjust their prompts accordingly to get more accurate results.
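As a minimal sketch of this kind of use, the snippet below calls o3-mini through the standard openai Python SDK with the documented reasoning_effort parameter; the extraction prompt and sample text are illustrative assumptions, not examples from the article.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low" | "medium" | "high"
    messages=[
        {
            "role": "user",
            "content": (
                "Extract the invoice number, total amount, and due date "
                "from this unstructured text:\n\n"
                "Invoice INV-2041, total due USD 1,250.00, payable by March 3."
            ),
        }
    ],
)

# The visible reasoning summary is shown in the ChatGPT/API interface;
# the API response itself returns the final answer as the assistant message.
print(response.choices[0].message.content)
```

Seeing where the model's reasoning goes astray on messy inputs is what lets users tighten the prompt (for example, by specifying the expected output fields) and rerun for a more accurate result.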

OpenAI CEO Sam Altman recently acknowledged that the company has been "on the wrong side of history" in the open-source debate. With DeepSeek-R1 being adopted and built upon by multiple organizations, how OpenAI adjusts its open-source strategy going forward will be worth watching.