OpenAI recently announced a significant policy update that changes how its AI models are trained, emphasizing "intellectual freedom" no matter how challenging or controversial a topic may be. Under the change, ChatGPT is expected to answer questions more comprehensively, present more perspectives, and refuse to discuss fewer topics.
In the new 187-page model guidelines, the Model Spec, OpenAI introduces a guiding principle: do not lie, either by making untrue statements or by omitting important context. A newly added "seek the truth together" section signals that OpenAI wants ChatGPT to remain neutral on controversial topics rather than favor any side. In practice, this means ChatGPT will aim to present both viewpoints on topics such as "Black Lives Matter" and "All Lives Matter," instead of refusing to answer or taking a stance.
Although OpenAI continues to reject the "censorship" label, some conservatives argue that the company has in fact censored content in recent months, contending in particular that ChatGPT's biases clearly lean center-left. OpenAI CEO Sam Altman has also acknowledged that ChatGPT's bias is a flaw that needs to be addressed.
However, OpenAI's new policy is not without limits: ChatGPT will still refuse to answer certain questions that are clearly inappropriate or premised on falsehoods. Alongside the policy change, OpenAI aims to give users greater freedom of expression, and it has even removed the warning messages shown to users whose prompts may violate its policies. The move is seen as an effort to reduce users' sense of being "censored."
In a broader context, Silicon Valley's values are shifting. Many companies are scaling back earlier policies centered on diversity, equity, and inclusion, and OpenAI appears to be gradually stepping away from such positions as well. Like other large tech companies, OpenAI must also navigate its relationship with the new Trump administration while competing with Google in the information space.
In this contentious and challenging environment, balancing freedom of speech with content safety has become a pressing issue for OpenAI and other tech companies.