OpenAI CEO Sam Altman recently announced on X (formerly Twitter) that the release of the new GPT-4.5 model will be phased due to a depletion of GPU resources. Altman stated that GPT-4.5 is massive and expensive, requiring "tens of thousands" of GPUs to support more ChatGPT users accessing the model.
GPT-4.5 will initially be available to ChatGPT Pro subscribers, starting this Thursday. ChatGPT Plus users will gain access the following week. This phased rollout is meant to work around the current GPU shortage while capacity is added for the new model.
Altman also noted that GPT-4.5's size makes it incredibly expensive to run. OpenAI will charge $75 per million input tokens (roughly 750,000 words) and $150 per million output tokens. That is 30 times the input price and 15 times the output price of OpenAI's previous flagship model, GPT-4o. The pricing has drawn significant attention, with many users calling the costs excessive.
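To put the pricing in perspective, here is a minimal sketch that estimates the cost of a single API request from its token counts. The GPT-4.5 prices are those reported above; the GPT-4o prices are back-calculated from the stated 30x/15x ratios ($75 / 30 = $2.50, $150 / 15 = $10.00) and should be treated as illustrative, not an official rate card.

```python
# Illustrative per-million-token prices in USD, from the figures in the article.
# GPT-4o values are derived from the 30x (input) and 15x (output) ratios.
PRICES = {
    "gpt-4.5": {"input": 75.00, "output": 150.00},
    "gpt-4o":  {"input": 2.50,  "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input tokens and 2,000 output tokens.
cost_45 = estimate_cost("gpt-4.5", 10_000, 2_000)  # 0.75 + 0.30 = 1.05 USD
cost_4o = estimate_cost("gpt-4o", 10_000, 2_000)   # 0.025 + 0.02 = 0.045 USD
```

At these rates, the same request costs over 20 times more on GPT-4.5 than on GPT-4o, which helps explain the reaction to the pricing.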
Altman stated in his announcement: "We've grown very fast and run out of GPU resources. We plan to add tens of thousands of GPUs next week and then roll it out to Plus tier users. This isn't the operational model we want, but it's hard to accurately predict the growth spurts that lead to GPU shortages." He added that OpenAI has been facing a persistent shortfall in computing power and plans to address it by developing its own AI chips and building large data centers.
Key Highlights:
🌐 OpenAI CEO Sam Altman revealed that the release of GPT-4.5 will be phased due to depleted GPU resources.
💰 GPT-4.5 is extremely expensive to use, with input and output costs being 30 and 15 times higher than GPT-4o, respectively.
🔧 OpenAI plans to address its insufficient computing power by developing its own AI chips and constructing large data centers.