Engineers at Meta have recently voiced their concerns on the anonymous workplace forum TeamBlind, describing the immense pressure created by R1, the AI model developed by the Chinese company DeepSeek. R1 is widely regarded as the first open-source model whose performance is comparable to OpenAI's o1. Unlike OpenAI's models, R1 is not only openly released but was also reportedly trained for an astonishingly low cost of about $5.5 million.
By comparison, the annual compensation of individual Meta executives often exceeds the entire training cost of DeepSeek V3, a fact that has reportedly left Meta's management embarrassed.
According to internal sources, the release of DeepSeek V3 last year already put significant pressure on Meta. Meta's engineers are racing to analyze DeepSeek's technology and replicate its key techniques as quickly as possible. In the past, AI researchers worldwide chased the large models coming out of the United States; now the situation has reversed, and American engineers are reverse-engineering China's AI technology.
DeepSeek has not only surpassed Meta's Llama 4 in benchmark tests but has also drawn attention across many fields with its strong performance. DeepSeek achieved its high reasoning performance through large-scale reinforcement learning (RL) combined with supervised fine-tuning (SFT). This display of technical confidence has led some American commenters to reflect on China's rapid rise in the AI sector.
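For readers unfamiliar with how RL can improve a model's reasoning, the sketch below is a deliberately tiny illustration of the general idea of rewarding a model when its sampled answer can be verified by a simple rule. It is not DeepSeek's actual pipeline; the toy policy, the arithmetic task, and the REINFORCE-style update are all assumptions made purely for illustration.

```python
# Toy sketch of RL with a verifiable reward (REINFORCE-style update in PyTorch).
# NOT DeepSeek's training pipeline; only illustrates "reward the model when a
# rule-based check says its sampled answer is correct".
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Hypothetical policy: scores 10 candidate answers (digits 0-9) for a prompt 'a + b'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 10))

    def forward(self, prompt):
        # prompt is a float tensor [a, b]; return log-probabilities over answers
        return torch.log_softmax(self.net(prompt), dim=-1)

policy = TinyPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(2000):
    a, b = torch.randint(0, 5, (2,)).tolist()
    prompt = torch.tensor([a, b], dtype=torch.float32)

    log_probs = policy(prompt)
    dist = torch.distributions.Categorical(logits=log_probs)
    answer = dist.sample()                       # the model "generates" an answer

    reward = 1.0 if answer.item() == a + b else 0.0   # rule-based, verifiable reward
    loss = -dist.log_prob(answer) * reward             # REINFORCE: reinforce rewarded samples

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, methods in this family operate on full language models, sample long chains of reasoning rather than single digits, and use more sophisticated objectives and baselines, but the core loop of sampling, checking, and rewarding is the same shape as this sketch.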
Key Points:
🌟 The training cost of DeepSeek's R1 model is only $5.5 million, with performance comparable to OpenAI's o1.
👨‍💻 Individual Meta executives earn more per year than DeepSeek's entire training cost, putting immense pressure on management.
📈 DeepSeek's success has sparked panic among American tech companies, challenging the United States' dominance in the AI field.