OpenAI recently released the system card for GPT-4o, a research document detailing the safety measures and risk assessments the company undertook before launching the new model.

The GPT-4o model officially went live in May of this year. Prior to its release, OpenAI enlisted an external team of security experts to conduct risk assessments, a common practice known as red teaming. The testers focused primarily on the risks the model might pose, such as generating unauthorized voice clones, producing obscene or violent content, or reproducing copyrighted audio.


According to OpenAI's own framework, researchers assessed the overall risk of GPT-4o as "medium". That rating is determined by the highest risk score among four main categories: cybersecurity, bio-threats, persuasiveness, and model autonomy. All of the categories except persuasiveness were rated low risk. Researchers found that some writing samples produced by GPT-4o were more persuasive than human-written text at swaying readers' opinions, although the model's output was not more persuasive overall.
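
To make that aggregation rule concrete, here is a minimal sketch in Python. The category names and ratings mirror the article; the ordered scale, function, and data layout are illustrative assumptions, not OpenAI's actual Preparedness Framework code:

```python
# Sketch of the "overall = highest category" scoring rule described above.
# The levels and categories follow the article; everything else is hypothetical.
RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered by severity

def overall_risk(category_scores: dict[str, str]) -> str:
    """Overall risk is the single highest rating across all categories."""
    return max(category_scores.values(), key=RISK_LEVELS.index)

# GPT-4o's reported ratings: every category is "low" except persuasiveness,
# so the overall score comes out "medium".
scores = {
    "cybersecurity": "low",
    "bio-threats": "low",
    "persuasiveness": "medium",
    "model autonomy": "low",
}
print(overall_risk(scores))  # -> "medium"
```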

OpenAI spokesperson Lindsay McCallum Rémy said the system card includes preparedness evaluations produced jointly by internal teams and external testers listed on OpenAI's website, including Model Evaluation and Threat Research (METR) and Apollo Research, both of which specialize in evaluating AI systems. This is not the first time OpenAI has published a system card: earlier models such as GPT-4, GPT-4 with vision, and DALL-E 3 underwent similar testing, and the results were released alongside them.

However, the system card arrives at a critical juncture, as OpenAI faces ongoing criticism of its safety standards from its own employees and from lawmakers. Just minutes before the GPT-4o system card was released, Massachusetts Senator Elizabeth Warren and Representative Lori Trahan co-signed an open letter urging OpenAI to explain how it handles whistleblowers and safety reviews. The letter cited several safety concerns, including CEO Sam Altman's brief ouster in 2023 over the board's concerns, and the departure of a safety executive who claimed that "safety culture and processes have taken a backseat to shiny products."

Moreover, releasing a powerful multimodal model right before the U.S. presidential election carries clear risks of misinformation and exploitation by malicious actors. Although OpenAI aims to prevent misuse through real-world scenario testing, public demands for transparency are growing. In California in particular, State Senator Scott Wiener is pushing a bill to regulate large language models, which would hold companies legally accountable when their AI is used for harmful purposes. If the bill passes, OpenAI's frontier models would have to undergo state-mandated risk assessments before being released to the public.

Key Points:

🌟 OpenAI's GPT-4o model is rated "medium" risk, driven chiefly by its persuasiveness; the other categories were rated low.

🔍 The system card arrives at a critical moment, as OpenAI faces external scrutiny of its safety standards and growing calls for transparency.

🗳️ The timing of the release is sensitive, occurring just before the U.S. presidential election, posing risks of misinformation and potential exploitation by malicious actors.