Recent reports have surfaced detailing internal tensions at OpenAI. Investor Peter Thiel reportedly warned Sam Altman about a clash between AI safety advocates and the company's commercial direction before Altman's dismissal in November 2023.

According to the Wall Street Journal, Thiel raised these concerns with Altman at a private dinner in Los Angeles, pointing to the influence of the Effective Altruism movement within OpenAI and suggesting Altman underestimated the sway of AI researcher Eliezer Yudkowsky. "You don't understand how Eliezer has convinced half your company to believe this stuff," Thiel reportedly told Altman.

Altman reportedly dismissed these concerns, pointing to Elon Musk's departure from OpenAI in 2018 and arguing that Musk's and Yudkowsky's warnings about the risks of AI hadn't hindered the company's progress. Internal safety concerns nonetheless became more pronounced in 2024, when several key AI safety figures left OpenAI, including Chief Scientist Ilya Sutskever and Jan Leike, who headed the "superalignment" team. Leike publicly criticized the company's safety strategy, saying his team had struggled to secure computing resources.

The Wall Street Journal report states that Chief Technology Officer Mira Murati and Sutskever spent months gathering evidence of Altman's management failings, documenting instances of him sowing discord among employees and circumventing safety protocols. During the launch of GPT-4 Turbo, Altman allegedly gave the board inaccurate information and was not transparent about the structure of the OpenAI Startup Fund, which he privately controlled. Rushed safety testing around the May 2024 release of GPT-4o later deepened these concerns.

When Murati attempted to address these issues with Altman directly, he brought HR representatives to her one-on-one meetings for several weeks, until she eventually stopped sharing her concerns with the board. Four board members, including Sutskever, ultimately decided to remove Altman through a series of confidential video conferences; Sutskever prepared two detailed documents outlining Altman's misconduct. The board also removed Greg Brockman from his board seat after Murati reported that he had bypassed her to communicate directly with Altman.

Initially, the board offered only a brief public explanation, stating that Altman had failed to be "consistently candid." Facing the threat of a mass employee exodus, however, the situation quickly reversed. Altman and Brockman were reinstated, and both Murati and Sutskever signed the employee letter calling for their return, despite their key roles in the dismissal. Sutskever was reportedly surprised by the outpouring of support for Altman, having expected employees to welcome the change.

The Wall Street Journal's reporting contradicts earlier statements, particularly Murati's claim that she was not involved in Altman's removal; the systematic evidence gathering indicates she played a central role. Both Murati and Sutskever subsequently left OpenAI to found their own AI startups. The reporting underscores the central role AI safety concerns played in Altman's brief dismissal, revealing deep divisions within OpenAI's leadership and leading to significant changes in the organization's approach to AI safety and development.

Key takeaways:

🌐 Peter Thiel warned Altman at a private dinner about a conflict between AI safety advocates and OpenAI's commercial direction.

🛠️ Key AI safety figures departed after Murati and Sutskever gathered evidence of Altman's mismanagement.

🔄 Altman and Brockman were reinstated amid broad employee support, prompting serious internal reflection on the underlying tensions.