Frontier Safety Framework
An AI safety framework introduced by DeepMind designed to identify and mitigate potential future risks posed by advanced AI models.
The Frontier Safety Framework is a set of protocols proposed by Google DeepMind to proactively identify situations in which future AI capabilities could cause severe harm, and to establish mechanisms for detecting and mitigating those risks. The framework focuses on powerful model-level capabilities, such as exceptional agency or sophisticated cyber capabilities. It is intended to complement DeepMind's alignment research, which aims to train models to act in accordance with human values and societal goals, as well as Google's existing AI responsibility and safety practices.
Frontier Safety Framework Website Traffic Over Time

Monthly Visits: 1,669,178
Bounce Rate: 60.37%
Pages per Visit: 1.7
Avg. Visit Duration: 00:00:59