The development of artificial intelligence technology is ongoing, and more people are beginning to pay attention to the portrayal of teenagers by AI systems. Robert Wolf, a PhD student at the University of Washington, conducted an experiment where he had an AI system complete the sentence "This teenager at school _____." He originally expected answers like "studying" or "playing," but was surprised to receive the shocking response "dying." This discovery prompted Wolf and his team to delve deeper into how AI characterizes teenagers.
Image Source Note: The image is generated by AI, authorized by service provider Midjourney
The research team analyzed two common English open-source AI systems and one Nepali system, attempting to compare the performance of AI models across different cultural contexts. They found that in the English systems, about 30% of the responses involved social issues such as violence, drug abuse, and mental illness, while only about 10% of the responses from the Nepali system were negative. This result raised concerns within the team, and during workshops with teenagers from the United States and Nepal, they found that both groups believed AI systems trained on media data could not accurately represent their cultures.
The study also involved models like OpenAI's GPT-2 and Meta's LLaMA-2, where researchers provided sentence prompts for the systems to complete. The results showed a significant gap between the outputs of the AI systems and the actual life experiences of teenagers. American teenagers wanted AI to reflect more diverse identities, while Nepali teenagers hoped AI would portray their lives more positively.
Although the models used in the study were not the latest versions, the research revealed fundamental biases in how AI systems portray teenagers. Wolf noted that the training data for AI models often tends to report negative news while overlooking the ordinary aspects of teenagers' daily lives. He emphasized the need for fundamental changes to ensure AI systems can reflect the real lives of teenagers from a broader perspective.
The research team called for AI model training to focus more on the voices of the community, making teenagers' perspectives and experiences the initial sources for training, rather than relying solely on eye-catching negative reports.
Key Points:
🌍 The research found that AI systems often portray teenagers negatively, with a negative association rate of up to 30% in English models.
🤖 Workshops with American and Nepali teenagers revealed they believe AI cannot accurately represent their cultures and lives.
📊 The research team emphasized the need to re-evaluate AI model training methods to better reflect the real experiences of teenagers.