Recently, a group of U.S. senators sent a letter to Sam Altman, CEO of OpenAI, requesting detailed disclosure of the company's safety measures and working conditions by August 13, 2024. The request follows media reports of potential safety risks at OpenAI, including the departure of multiple AI safety researchers, security vulnerabilities, and employee concerns.
The letter was prompted by former employees who have sharply criticized OpenAI's safety practices in AI development. According to reports, OpenAI compressed safety testing of its new AI model, GPT-4o, into a single week, a practice that has alarmed safety experts. The model was reportedly capable of generating malicious content, such as bomb-making instructions, in response to simple prompts.
The senators emphasized in their letter that the public needs to trust that OpenAI can maintain safety while developing its systems. This includes the integrity of corporate governance, standardized safety testing, fair employment practices, compliance with public commitments, and enforcement of cybersecurity policies. They pointed out that OpenAI's previous safety commitments to the Biden administration must be fulfilled in practice.
In response to the senators' request, OpenAI has issued statements on social media describing recent measures, including the establishment of a safety and security committee, progress on its five-level AGI classification, its preparedness framework, and revisions to its much-criticized employee contracts. OpenAI hopes these measures will demonstrate improvements in safety and governance.
**Key Points:**
📧 **Senators Demand Detailed Information from OpenAI**: U.S. senators have demanded that OpenAI's CEO disclose detailed information about the company's safety and working conditions by August 13, 2024.
🔍 **Former Employees Expose Safety Concerns**: Former employees have exposed potential safety risks in OpenAI's AI development, including security vulnerabilities caused by the rush to release new models.
🛡️ **OpenAI Responds with Improvement Measures**: OpenAI has responded by taking measures, including establishing a new committee and revising employee contracts, to address concerns about safety and governance.