Wharton professor Ethan Mollick recently shared some details about Anthropic. He said Anthropic's PR team told him that training its flagship AI model, Claude 3.7 Sonnet, cost "tens of millions of dollars" and used less than 10^26 FLOPs (floating-point operations) of compute. He added that, according to Anthropic, Claude 3.7 Sonnet would not count as a 10^26-FLOP model, but that future models would be significantly larger. TechCrunch reached out to Anthropic for confirmation but had not received a response at the time of publication.
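
For a rough sense of how a FLOP count maps onto dollars, the back-of-envelope sketch below converts a hypothetical 10^26-FLOP training run into GPU-hours and rental cost. The hardware throughput, utilization rate, and hourly price are illustrative assumptions, not figures from Anthropic or TechCrunch.

```python
# Back-of-envelope: what a 1e26-FLOP training run might cost to rent.
# All numbers below are rough, illustrative assumptions -- not Anthropic's.

TOTAL_FLOPS = 1e26          # the compute threshold mentioned above
GPU_FLOPS_PER_SEC = 1e15    # ~1 petaFLOP/s per accelerator (assumed)
UTILIZATION = 0.4           # assumed fraction of peak throughput actually sustained
PRICE_PER_GPU_HOUR = 2.0    # assumed cloud rental price in USD

gpu_seconds = TOTAL_FLOPS / (GPU_FLOPS_PER_SEC * UTILIZATION)
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * PRICE_PER_GPU_HOUR

print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost_usd / 1e6:,.0f}M to rent")
# ~69 million GPU-hours, on the order of $140M under these assumptions --
# which is why a run costing "tens of millions" plausibly sits below 1e26 FLOPs.
```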

Previously, Anthropic CEO Dario Amodei revealed that training Claude 3.5 Sonnet, the model's predecessor released in the fall of 2024, also cost tens of millions of dollars. That is modest compared with the top models of 2023: OpenAI spent more than $100 million developing GPT-4, and Stanford University researchers estimated that Google spent nearly $200 million training its Gemini Ultra model.

However, Amodei predicts that future AI models will cost billions of dollars, a figure that doesn't include safety testing or fundamental research. And as the industry adopts "reasoning" models that work through problems over longer periods, the computational cost of running them is likely to keep rising.