Sam Altman, CEO of OpenAI, has revealed that OpenAI is working with the U.S. AI Safety Institute, a federal government body tasked with evaluating and addressing risks posed by AI platforms, and will give the institute early access to its next major generative AI model for safety testing.


Altman announced the move in a post on X on Thursday evening, offering few details. Together with a similar agreement struck with a UK AI safety body in June, it appears intended to counter claims that OpenAI has deprioritized AI safety in its push to develop more powerful generative AI technologies.

In May, OpenAI effectively disbanded the team dedicated to developing controls meant to keep "superintelligent" AI systems from going rogue. Reports indicated that OpenAI had sidelined the team's safety research in favor of launching new products, prompting the departure of its two co-leads, Jan Leike (now leading safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who has started his own safety-focused AI company, Safe Superintelligence Inc.).

Facing mounting criticism, OpenAI said it would remove the non-disparagement clauses that discouraged employee whistleblowing, establish a safety committee, and dedicate 20% of its computing resources to safety research. (The disbanded safety team had been promised 20% of OpenAI's compute for its work but reportedly never received it.) Altman reiterated the 20% commitment and confirmed that OpenAI voided the non-disparagement terms for new and existing employees in May.

These measures have not quelled skepticism among some observers, however, particularly given that OpenAI staffed the safety committee entirely with company insiders and recently reassigned a top AI safety executive to another department.

Five senators, including Brian Schatz of Hawaii, recently raised questions about OpenAI's policies in a letter to Altman. OpenAI's Chief Strategy Officer, Jason Kwon, responded today, writing that OpenAI is "committed to implementing rigorous safety protocols at every stage of our process."

The timing of OpenAI's agreement with the U.S. AI Safety Institute looks somewhat suspect given the company's endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the institute as an executive body setting standards and guidelines for AI models. Taken together, the moves could be read as an attempt at regulatory capture, or at least as an effort by OpenAI to exert influence over federal AI policymaking.

It is worth noting that Altman sits on the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which advises on the "safe and secure development and deployment of AI" across critical U.S. infrastructure. OpenAI has also sharply increased its federal lobbying, spending $800,000 in the first six months of 2024 compared with just $260,000 in all of 2023.

The U.S. AI Safety Institute is housed within the Commerce Department's National Institute of Standards and Technology and works with a consortium of companies that includes Anthropic as well as major tech firms such as Google, Microsoft, Meta, Apple, Amazon, and NVIDIA.

Key Points:

🎯 OpenAI promises early access to its next model for the U.S. AI Safety Institute.

🎯 OpenAI previously disbanded a key safety team, drawing criticism.

🎯 The timing of OpenAI's cooperation with U.S. agencies raises eyebrows, coming amid a sharp rise in its federal lobbying spending.