According to a recent report from The Wall Street Journal, OpenAI's highly anticipated next-generation artificial intelligence model, GPT-5 (codenamed Orion), is facing significant challenges, with development progress falling short of expectations.

The report says that over roughly 18 months of development, OpenAI has completed at least two rounds of large-scale training. The initial training runs proceeded more slowly than expected, however, making subsequent large-scale training both time-consuming and costly. Although GPT-5 shows improvements over its predecessor, the gains so far are not enough to justify the model's enormous operating costs.


To keep the project moving, OpenAI has adopted a multi-pronged data acquisition strategy. Beyond public data and licensed content, the company has hired people specifically to create new training data, such as writing code and solving mathematical problems. It is also using another of its models, o1, to generate synthetic data.

This aligns with an earlier report from the tech outlet The Information, which indicated that GPT-5 may not deliver the kind of significant leap seen in past models, prompting OpenAI to look for new development strategies. OpenAI has not responded to the report, but it has confirmed that the model codenamed Orion will not be released this year.