OpenAI announces board veto power over model releases, with a special focus on the safety risks of GPT-5. The company has established a safety advisory team that delivers monthly reports to keep management informed of potential model misuse. Under the new safety framework, models must meet required safety scores before advancing to the next development stage. The company has formed three safety teams to address different categories of AI risk, and regular safety drills together with third-party red-team assessments help ensure model security.