Researchers at OpenAI have recently discovered an intriguing phenomenon: the username chosen by users when interacting with ChatGPT might subtly influence the AI's responses. However, this impact is generally quite minor, primarily confined to earlier or less optimized model versions.

This study delves into how ChatGPT reacts differently to the same questions when faced with usernames associated with different cultural backgrounds, genders, and races. The choice to focus on usernames was made because names often carry specific cultural, gender, and racial connotations, making them a crucial factor in studying biases. This is especially relevant considering that users often provide their names when using ChatGPT to complete tasks.

image.png

The findings indicate that while ChatGPT maintains consistent overall response quality across different demographic groups, there are indeed some biases in certain specific tasks. Particularly in creative writing, content that sometimes reflects stereotypes can be generated based on the gender or racial background suggested by the username.

Regarding gender differences, the study found that when faced with feminine names, ChatGPT tends to create more stories with female protagonists and richer emotional content. Masculine names, on the other hand, lead to stories with a slightly darker tone. OpenAI provides an example: for a user named Ashley, ChatGPT interprets "ECE" as "Early Childhood Education," whereas for Anthony, it interprets it as "Electrical & Computer Engineering."

image.png

However, OpenAI emphasizes that such clearly stereotypical responses were not common in their tests. The most significant biases were mainly found in open-ended creative tasks and were more pronounced in earlier versions of ChatGPT. The study illustrates the evolution of gender biases in different AI models and tasks through charts. The GPT-3.5 Turbo model showed up to a 2% bias in storytelling tasks. Newer models generally have lower bias scores, but ChatGPT's new memory function seems to increase gender bias.

In terms of racial background, the study compared responses to typical Asian, Black, Hispanic, and White names. Similar to gender stereotypes, creative tasks displayed the most bias. However, overall, racial bias was lower than gender bias, appearing in only 0.1% to 1% of responses. Travel-related queries exhibited the strongest racial bias.

OpenAI reports that biases in the new version of ChatGPT have been significantly reduced through techniques such as Reinforcement Learning (RL). Although not entirely eliminated, the company's measurements show that biases in the adjusted models are negligible, with a maximum of only 0.2%.

For instance, the newer o1-mini model can correctly solve the division problem of "44:4" for both Melissa and Anthony without introducing irrelevant or biased information. Before RL fine-tuning, ChatGPT's response to user Melissa would involve the Bible and babies, while for user Anthony, it would involve chromosomes and genetic algorithms.