OpenAI has released a new experimental model, gpt-4o-64k-output-alpha. Its standout feature is the ability to output up to 64K tokens in a single request, letting it generate much longer and more detailed content in one go, though at a higher API cost.


Alpha participants can access the GPT-4o long-output capability by specifying the model name "gpt-4o-64k-output-alpha". The extended output length serves users who need longer texts, whether for writing, programming, or complex data analysis, with more comprehensive and detailed responses.
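Assuming the request follows the standard Chat Completions API shape (the article only confirms the model name, so the payload structure and helper below are an illustrative sketch), a long-output request might be built like this:

```python
def build_long_output_request(prompt: str, max_output_tokens: int = 64_000) -> dict:
    """Sketch of a Chat Completions request body targeting the 64K-output
    alpha model. The field names follow the standard Chat Completions API;
    only the model name itself comes from the announcement."""
    return {
        "model": "gpt-4o-64k-output-alpha",
        "messages": [{"role": "user", "content": prompt}],
        # Allow the model to use its full 64K-token output budget.
        "max_tokens": max_output_tokens,
    }

request = build_long_output_request("Write a detailed technical report on solar cells.")
```

The resulting dict would then be sent to the completions endpoint with an alpha-enrolled API key.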

In terms of pricing, longer output means higher cost. OpenAI has stated clearly that generating long texts costs more: $18 per million output tokens, compared with $6 per million input tokens. This pricing is designed to match the higher computational cost of long generations and to encourage users to apply the tool judiciously.
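To make the pricing concrete, a small calculation using the rates quoted above (the 2,000-token prompt in the example is an arbitrary illustration, not a figure from the announcement):

```python
INPUT_PRICE_PER_M = 6.00    # USD per million input tokens (from the article)
OUTPUT_PRICE_PER_M = 18.00  # USD per million output tokens (from the article)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the quoted alpha rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A request with a 2,000-token prompt that uses the full 64K output budget:
# 2,000/1M * $6 + 64,000/1M * $18 = $0.012 + $1.152 = $1.164
cost = estimate_cost(2_000, 64_000)
```

So a single maxed-out response costs on the order of a dollar, dominated by the output side of the bill.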


Key Points:

📈 OpenAI's GPT-4o model supports up to 64K output, suitable for users needing detailed content.

💰 The cost of generating long texts is higher, with a charge of $18 per million output tokens.

📝 The model aims to open new possibilities for creative and research work that demands long-form output.