According to an analysis by The Information, based on internal financial data, OpenAI's losses this year could reach as high as $5 billion. Its competitor, Anthropic, is also facing significant losses in the billions.

The Information reports that OpenAI's costs for training AI models and running inference could amount to $7 billion this year, and the integration of ChatGPT into Apple's products is expected to push inference costs even higher. Personnel costs could add up to another $1.5 billion.

OpenAI's spending on renting Microsoft servers alone is close to $4 billion, even with a discounted rate of about $1.30 per hour per Nvidia A100 chip. This supports the view that Microsoft's interest in AI investment is primarily tied to the growth of its Azure cloud platform; by contrast, Microsoft's own AI products, such as Copilot and the Bing integration, have not performed as well.
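A rough back-of-envelope sketch (illustrative only, using just the two figures reported above and assuming round-the-clock utilization) gives a sense of the scale of compute that spend implies:

```python
# Back-of-envelope sketch of what ~$4B/year buys at the reported discounted
# A100 rate of $1.30 per GPU-hour. Figures are illustrative, not from the report.

annual_server_spend_usd = 4_000_000_000   # reported ~$4B Azure rental spend
a100_rate_usd_per_hour = 1.30             # reported discounted per-GPU-hour price
hours_per_year = 365 * 24                 # 8,760 hours

gpu_hours_per_year = annual_server_spend_usd / a100_rate_usd_per_hour
continuously_running_a100s = gpu_hours_per_year / hours_per_year

print(f"GPU-hours per year: {gpu_hours_per_year:,.0f}")          # ~3.1 billion
print(f"A100s running 24/7: {continuously_running_a100s:,.0f}")  # ~350,000
```

At the reported rate, in other words, $4 billion a year corresponds to something on the order of 350,000 A100s running continuously.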

OpenAI's AI training costs, including data payments, could rise to $3 billion this year. The company currently employs about 1,500 people and plans to expand further. According to The Information, personnel expenses could reach $1.5 billion by the end of the year. OpenAI's operating costs this year could reach $8.5 billion, while revenue is expected to be between $3.5 billion and $4.5 billion, depending on sales in the second half of the year.
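Putting the reported figures together (a simple illustrative calculation, not a number published by The Information) shows how the headline loss estimate follows:

```python
# Illustrative sketch: how the reported cost and revenue estimates imply
# a loss of roughly $4-5 billion. All inputs are the figures cited above.

operating_costs = 8.5                  # $B: ~$4B servers + ~$3B training/data + ~$1.5B personnel
revenue_low, revenue_high = 3.5, 4.5   # $B: expected revenue range for the year

loss_best_case = operating_costs - revenue_high   # ~$4B loss
loss_worst_case = operating_costs - revenue_low   # ~$5B loss
print(f"Implied loss: ${loss_best_case:.1f}B to ${loss_worst_case:.1f}B")
```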

In comparison, Anthropic's situation is worse, despite being smaller in scale. Sources familiar with the data indicate that Anthropic expects its expenses to exceed $2.7 billion this year, while revenue is only one-fifth to one-tenth of OpenAI's. The startup's estimated costs for computing alone are $2.5 billion.

By the end of the year, Anthropic anticipates an annualized revenue of about $800 million, or $67 million per month. However, Anthropic must share this revenue with Amazon.

With Meta pushing open-source models and smaller companies such as Mistral and Cohere gaining ground in Europe and in niche markets (such as B2B chat over company data), providers of expensive-to-build and expensive-to-run AI models face fierce competition. Companies also struggle to measure the value generative AI adds to their processes when they deploy chatbots as a "general-purpose technology," especially when tools such as Microsoft's Copilot or OpenAI's ChatGPT Enterprise have no clear use case for every employee.

Early doubts are emerging about the economic viability of the current AI market. This does not negate the overall value of generative AI, but it does raise the question of whether the investment is proportional to the returns.

Potential growth areas include OpenAI's new product, SearchGPT, though whether it can replicate the success of ChatGPT is uncertain. Competing products such as Google's Gemini subscription service have failed to make a significant impact; ChatGPT may remain a one-off success.

More versatile multi-modal models could create new use cases, leading to new applications, more usage, and higher revenue. If efficiency improves as well, profit margins could eventually follow. However, many questions remain about the ultimate quality and cost of generating multi-modal content such as video.

To reach the next level, the AI market may need significant breakthroughs in general reasoning capabilities. That would open up new automation and business opportunities and could address fundamental weaknesses of current AI systems, such as their tendency to hallucinate and produce confident nonsense.

For OpenAI's CEO, Sam Altman, and others, this could be the ultimate bet, which explains why large companies continue to invest billions of dollars in R&D. As Google CEO Sundar Pichai said during the recent earnings call: "The risk of underinvestment here is far greater than the risk of overinvestment."

Key Points:

1. 💸 **OpenAI could lose up to $5 billion this year, with Anthropic facing billions in losses**

2. 🔍 **Fierce competition and unclear enterprise use cases are raising doubts about whether AI investment is proportional to returns**

3. 🤖 **Multi-modal models may bring new applications and revenue growth, but efficiency and quality are still in question**