Recently, Anna Makanju, OpenAI's Vice President of Global Affairs, shared her views on AI bias at the United Nations' Summit of the Future.

She argued that "reasoning" models like OpenAI's o1 can significantly reduce bias in AI systems. How does o1 achieve this? According to Makanju, these models can identify bias in their own responses and adhere more strictly to rules against producing "harmful" answers.


She explained that when processing a question, the o1 model spends more time evaluating its response and can check its own work: "It can say, 'This is how I approach this problem,' and then review its own answer, thinking, 'Oh, there might be a flaw in the reasoning here.'" She even claimed that o1 performs "almost perfectly" at analyzing its own biases and will keep improving as the technology advances.

However, the claim of "almost perfect" seems exaggerated. OpenAI's own internal tests found that, compared with "non-reasoning" models, including its own GPT-4o, o1 performed less well on some bias tests. On questions about race, gender, and age, o1 sometimes did worse than GPT-4o. Although o1 was less likely to discriminate implicitly, it was more likely to discriminate explicitly, particularly on questions of age and race.

Interestingly, the cost-efficient version of o1, o1-mini, performed even worse. Tests showed that o1-mini was more likely than GPT-4o to discriminate explicitly on gender, race, and age, and it also showed more implicit discrimination on age.

Beyond bias, current reasoning models have other limitations. OpenAI itself acknowledges that o1 offers only minimal benefit on certain tasks. It responds slowly, with some questions taking over 10 seconds to answer, and it is expensive, costing 3 to 4 times as much as GPT-4o.

If, as Makanju claims, reasoning models really are the best path to fair AI, they will need to improve in areas beyond bias to become a viable alternative. If they do not, only those with deep pockets and a willingness to tolerate delays and performance issues will truly benefit.

Key Points:

🌟 OpenAI's o1 model is claimed to significantly reduce AI bias, but internal test results show it falls short of that claim.

💡 o1 shows less implicit discrimination than GPT-4o but more explicit discrimination, particularly on age and race.

💰 The reasoning model o1 is costly, runs slowly, and still needs improvement in multiple aspects.