Recently, AI company Anthropic updated its "responsible scaling" policy, clarifying which of its models require additional safety precautions. The move is intended to mitigate potential risks before new models are released.
According to Anthropic's blog post, if stress testing reveals that a model could help a "resource-constrained state actor" develop chemical or biological weapons, the company will put new security protections in place before releasing it. Anthropic will also strengthen security if a model can fully automate the work of junior researchers, or if it would accelerate the pace of AI scaling to an unsafe degree.
Anthropic recently secured a new funding round at a $61.5 billion valuation, making it one of the highest-valued AI startups. That figure is still small compared with OpenAI's, which recently announced a $40 billion funding round valuing the company at $300 billion. With the generative AI market projected to exceed $1 trillion in revenue within the next decade, competition is intensifying: tech giants such as Google, Amazon, and Microsoft are actively launching new products and features, and competition from China is also emerging, as shown by the rapid rise in popularity of DeepSeek's AI model in the United States.
In its "responsible scaling" policy released last October, Anthropic also mentioned conducting physical office security checks to detect potential surveillance devices. The company confirmed the establishment of an executive risk committee and an internal security team. Additionally, they will introduce "physical" security processes, utilizing advanced detection equipment and technology to monitor for surveillance devices and ensure internal security.
Anthropic's ongoing work on AI safety and responsibility aims not only to protect its own technology but also to support the healthy development of the industry as a whole.