The non-profit research group Machine Intelligence Research Institute (MIRI) is calling for a global halt to research on foundation or "frontier" models, warning that safety failures could threaten human survival. Foundation models are AI systems that can be applied across many tasks and modalities. MIRI believes such models will become smarter than humans and could potentially "destroy humanity."

Image note: AI-generated illustration ("Brain Large Model"), provided by the image licensing service Midjourney.

In the tech sector, leading figures including Elon Musk and Steve Wozniak have previously called for a pause on developing foundation models more powerful than OpenAI's GPT-4. MIRI, however, wants to go further: its recently unveiled communication strategy calls for a complete halt to attempts to build any system smarter than humans.

The group states: "Policymakers primarily address issues through compromises: they give in somewhere to gain an advantage elsewhere. We are concerned that most legislation aimed at preserving human survival will be passed through the usual political process and be ground down into ineffective compromises. Meanwhile, the clock is ticking. AI labs continue to invest in developing and training more powerful systems. It seems we are not close to obtaining the comprehensive legislation we need."

MIRI hopes that governments will require companies developing foundation models to install "kill switches," so that AI systems can be shut down if they begin to show malicious or "x-risk" tendencies.
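
MIRI's proposal does not describe a concrete mechanism for such a switch. As a minimal, purely hypothetical sketch, one common pattern for an operator-controlled shutdown is an external flag that a long-running workload polls at every step; the file path, function names, and loop below are all invented for illustration and are not anything MIRI or any lab has specified.

```python
import os
import sys
import time

# Hypothetical only: the "kill switch" is modeled here as a flag file that
# an operator (or regulator) can create to force a running AI workload to
# halt. The path and polling loop are invented for this sketch.
KILL_SWITCH_PATH = "/var/run/ai_kill_switch"

def kill_switch_engaged() -> bool:
    """Return True if the external shutdown flag has been set."""
    return os.path.exists(KILL_SWITCH_PATH)

def run_workload(max_steps: int = 1_000_000) -> None:
    for step in range(max_steps):
        if kill_switch_engaged():
            # Halt immediately rather than finishing the current batch.
            print(f"Kill switch engaged at step {step}; shutting down.")
            sys.exit(1)
        # ... one training or inference step would run here ...
        time.sleep(0.01)  # placeholder for real work

if __name__ == "__main__":
    run_workload()
```

In practice, an operator could engage the switch from outside the process (e.g. `touch /var/run/ai_kill_switch`), and the workload would stop within one step; real deployments would need tamper resistance that a flag file obviously lacks.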

The non-profit organization says it remains committed to the long-term goal of intelligent systems that surpass humans, but wants them built only after "we know how to safely construct such AI."

MIRI was founded by Eliezer Yudkowsky in 2000, with supporters including Peter Thiel and Vitalik Buterin, co-founder of the Ethereum cryptocurrency. The Future of Life Institute is also one of MIRI's main contributors.

Bradley Shimmin, chief analyst at AI and data analytics research firm Omdia, says MIRI will have a hard time convincing lawmakers because its position lacks supporting research. He says: "The market has considered these issues and concluded that the current and near-future state of the art for transformer-based GenAI models can do little more than create useful representations of complex subjects." Shimmin notes, however, that MIRI correctly identifies the knowledge gap between those building AI and those regulating it.

Key points:

- 📣 The non-profit research group Machine Intelligence Research Institute (MIRI) calls for a global halt to research on foundation or "frontier" models, fearing safety issues could threaten human survival.

- 🤖 MIRI hopes governments will require companies developing foundation models to install "kill switches" that can shut AI systems down if they develop malicious or "x-risk" tendencies.

- 🌐 Bradley Shimmin, chief analyst at AI and data analytics research firm Omdia, says MIRI will have a hard time convincing lawmakers because its position lacks supporting research.