In the fast-moving landscape of AI, a subtle yet significant shift went unnoticed until recently: Anthropic, a prominent AI company, appears to be distancing itself from its past. The company has reportedly removed, without announcement, several voluntary commitments it jointly issued with the Biden administration in 2023 from its official website. These commitments, once cited as strong evidence of Anthropic's embrace of AI safety and "trustworthy" AI, have seemingly vanished.
The Midas Project, an AI oversight organization, was the first to notice the change. According to the group, commitments to share AI risk-management information with government and industry, and to research AI bias and discrimination, which had previously been displayed in Anthropic's transparency center, disappeared last week. Only the commitment related to reducing AI-generated sexually abusive imagery remains.
Anthropic has handled the change quietly, even secretively: it made no public announcement and has remained silent in response to media inquiries, fueling speculation about its motives.
In July 2023, Anthropic, along with tech giants such as OpenAI, Google, Microsoft, Meta, and Inflection, publicly committed to a series of AI safety pledges in response to the Biden administration's call. The pledges included rigorous internal and external safety testing before releasing AI systems, significant investment in cybersecurity to protect sensitive AI data, and the development of watermarking for AI-generated content.
It is crucial to note that these commitments were not legally binding, and Anthropic had already adopted many of the practices. Within the Biden administration's strategic framework, however, the voluntary agreement carried deeper political significance: it served as a key signal of the administration's AI policy priorities ahead of the more comprehensive AI executive order expected months later.
However, times have changed, and the political climate has shifted. The Trump administration has publicly stated that its approach to AI governance will differ significantly from its predecessor's.
In January, the Trump administration rescinded the Biden administration's AI executive order. That order had directed the National Institute of Standards and Technology to develop industry guidelines to help companies identify and correct flaws in their models, including bias. Critics close to the Trump administration argued that the order's reporting requirements were burdensome and effectively forced companies to disclose trade secrets, which they found unacceptable.
Following this, Trump swiftly signed a new executive order instructing federal agencies to promote AI development "unburdened by ideological biases," aiming to foster "human flourishing, economic competitiveness, and national security." Interestingly, Trump's new order makes no mention of combating AI discrimination, a central tenet of the Biden administration's initiative.
As the Midas Project pointed out on social media, the Biden-era AI safety commitments carried no expiration date and were not tied to the party of the sitting president. Even after the November election, several companies that signed the commitments publicly affirmed their continued adherence.
In the few months since the Trump administration took office, however, Anthropic has not been the only tech company adjusting its public policy stance. OpenAI recently announced its embrace of "intellectual freedom," emphasizing an open approach regardless of how sensitive or controversial a topic might be, while pledging that its AI systems would not unduly censor particular viewpoints.
OpenAI also quietly removed a page from its website dedicated to its commitments to diversity, equity, and inclusion (DEI). These DEI initiatives had previously faced fierce criticism from the Trump administration, leading many companies to halt or significantly adjust their DEI programs.
Several of Trump's Silicon Valley AI advisors, including Marc Andreessen, David Sacks, and Elon Musk, have publicly accused tech giants like Google and OpenAI of implementing "AI censorship" by artificially limiting the responses of their AI chatbots. However, multiple labs, including OpenAI, deny that their policy adjustments are directly influenced by political pressure.
It is also worth noting that OpenAI and Anthropic, both leading players in the AI field, are actively competing for government contracts.
Amidst the growing controversy, Anthropic finally broke its silence, releasing a statement to address the concerns:
"We remain firmly committed to the voluntary AI commitments established by the Biden administration. The progress and specific actions of these commitments continue to be reflected in the content of [our] transparency center. To avoid further confusion, we will add a dedicated section directly referencing our progress."
Will this belated statement fully alleviate the concerns? Was Anthropic's apparent withdrawal of the commitments a misunderstanding, or a calculated strategic shift? Amid the shifting currents of political and commercial maneuvering, every move by the AI giants deserves close attention.