OpenAI has released a safety framework for ChatGPT, designed to address the potentially severe risks posed by AI. The framework uses risk scorecards to measure and track potential hazards, and relies on a team of experts to monitor the technology and raise alerts when dangers emerge. OpenAI is also recruiting national security experts to address major risks and will allow independent third parties to test its technology. The move is intended to ensure that AI is used safely. As artificial intelligence becomes more widespread, cooperation and coordination on safety technology are becoming increasingly important.