OpenAI has announced its initiatives in the field of AI safety. According to OpenAI, its Safety Systems team is at the forefront of ensuring that AI models are safe, robust, and reliable when deployed in real-world applications. The team addresses safety issues through both practice and research, and develops fundamental solutions for safe and trustworthy AI. It comprises several groups, including Safety Engineering, Model Safety Research, Safety Reasoning Research, and Human-AI Interaction. OpenAI emphasizes its commitment to core safety problems: preventing models from giving unsafe or inappropriate answers, detecting harmful answers or actions, and preserving user privacy while maintaining safety. The company also highlights research centered on aligning model behavior, working with human experts through human-AI collaboration to ensure that model behavior is consistent with human values.
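The announcement describes detecting harmful answers only at a high level. As a concrete illustration of what such a check can look like in practice (this is not OpenAI's internal safety tooling), the sketch below screens a piece of text with the publicly documented Moderation endpoint of the openai Python SDK. The helper name `is_flagged` is hypothetical, and the example assumes an `OPENAI_API_KEY` is set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_flagged(text: str) -> bool:
    """Screen a piece of text against OpenAI's Moderation endpoint."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = response.results[0]
    if result.flagged:
        # result.categories holds per-category booleans (hate, self-harm, ...).
        triggered = [
            name for name, hit in result.categories.model_dump().items() if hit
        ]
        print("Flagged categories:", ", ".join(triggered))
    return result.flagged

# Example: screen a candidate model output before showing it to a user.
print(is_flagged("Some model output to screen before display."))
```

A check like this is typically used as a filter on either side of a model call, screening user input before it reaches the model or screening the model's output before it reaches the user.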