In recent years, emotion recognition technology has gradually emerged in the tech industry. Many tech companies have launched AI-driven emotion recognition software that claims to infer a person's emotional state, such as happiness, sadness, anger, or frustration, from biometric data. However, a growing body of scientific research indicates that these technologies are not as reliable as advertised.

[Image: AI-generated illustration, licensed by service provider Midjourney]

According to recent research, emotion recognition technology faces serious questions about its scientific validity. Many companies claim these systems are objective and grounded in scientific methods, but in practice they often rely on outdated theories which assume that emotions can be quantified and are expressed the same way everywhere in the world. In reality, how emotions are expressed is profoundly shaped by culture, context, and individual differences. For instance, a person's skin moisture may increase, decrease, or remain unchanged when they are angry, so no single biometric indicator can reliably determine an emotional state.

Meanwhile, these emotion recognition technologies also pose legal and social risks, particularly in the workplace. Under new EU regulations, the use of AI systems to infer emotions in the workplace is prohibited except for medical or safety reasons. In Australia, regulation in this area has not kept pace. Although some companies have trialed facial emotion analysis in hiring, both the effectiveness and the ethics of these tools have drawn widespread concern.

Moreover, emotion recognition technology faces potential bias problems. These systems may discriminate when identifying the emotions of people of different races, genders, and disability statuses. For example, some studies indicate that emotion recognition systems are more likely to read Black faces as angry, even when the individuals are smiling to the same degree as others.

Although tech companies acknowledge the bias problem in emotion recognition, they stress that such biases stem mainly from the datasets used to train these systems. In response, inTruth Technologies has said it is committed to using diverse and inclusive datasets to reduce bias.

Public attitudes toward emotion recognition technology are also largely negative. A recent survey found that only 12.9% of Australian adults support the use of facial-based emotion recognition in the workplace, with many viewing it as an invasion of privacy.

Key Points:

🌐 The global market for emotion recognition is growing rapidly, but the technology's scientific basis is under scrutiny.

⚖️ The EU has banned the use of AI systems for emotion inference in the workplace, while Australia urgently needs to establish relevant regulations.

🤖 The public largely views emotion recognition technology negatively, seeing it as invasive of privacy and prone to bias.