Google recently launched PaliGemma 2, a new family of AI models that is claimed to "recognize" human emotions through image analysis. The claim quickly sparked widespread discussion and serious skepticism among academics and technology ethics experts.

Built on Google's open Gemma 2 language models, the system can generate detailed image descriptions that go beyond simple object recognition, attempting to describe the behavior and emotions of the people in an image. Several prominent experts, however, have raised serious warnings about the technology's scientific validity and potential risks.
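For concreteness, here is a minimal sketch of how open-ended captioning is typically run with the published PaliGemma checkpoints through the Hugging Face transformers library. The model identifier, prompt string, and image filename below are illustrative assumptions, not Google's official example; consult the model card before relying on them.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name; check the model card
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
).eval()

image = Image.open("photo.jpg")  # hypothetical local image
prompt = "<image>describe en"    # PaliGemma-style task prompt

inputs = processor(text=prompt, images=image, return_tensors="pt")
inputs = inputs.to(model.device, dtype=torch.bfloat16)  # casts only float tensors

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
generated = output[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```

The key point is that the model produces free-text descriptions: any "emotion" in the output is whatever the generated caption happens to assert, not a validated measurement of a person's inner state.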

Professor Sandra Wachter of the Oxford Internet Institute put it bluntly: trying to "read" human emotions with AI is like "asking a Magic 8 Ball for advice." The comparison vividly captures how little scientific footing emotion recognition technology stands on.


In fact, the scientific foundations of emotion recognition are fragile. Psychologist Paul Ekman's theory of six basic universal emotions has been widely challenged by subsequent research, and people from different cultural backgrounds express emotions in significantly different ways, making universal emotion recognition a near-impossible task.

AI researcher Mike Cook of Queen Mary University of London was even more direct: emotion detection is simply not possible in the general case. People often believe they can judge others' emotions by observation, but that ability is far more complicated and unreliable than it feels.

More concerning still, such AI systems often carry serious biases. Multiple studies have shown that facial analysis models can judge emotions differently depending on a subject's skin color, a flaw that risks deepening existing social discrimination.

Google says it tested PaliGemma 2 extensively and that the model performed well on benchmarks such as FairFace, but experts remain seriously skeptical, arguing that such limited testing cannot fully capture the ethical risks the technology may pose.

Most dangerous of all, as an open model it could be misused in high-stakes areas such as employment, education, and law enforcement, causing real harm to vulnerable groups. As Professor Wachter warns, this could lead to a frightening "out of control" future in which whether people get a job, a loan, or a place at university hinges on the "emotional judgments" of an unreliable AI system.

In a field evolving as quickly as artificial intelligence, technological innovation matters, but ethics and safety cannot be an afterthought. The debate around PaliGemma 2 is another reminder that new AI systems deserve vigilant, critical scrutiny.