As enterprises race to integrate AI into every corner of their operations, the concept of Affective AI has begun to quietly emerge. A report from PitchBook indicates that the technology is surging in popularity and is poised to become the next trend in enterprise software.

So, what exactly is Affective AI? In simple terms, it aims to help AI assistants understand human emotions so they can interact with people more empathetically. If companies deploy AI assistants among employees and managers, and put chatbots to work as salespeople and customer service representatives, how will those systems tell an angry inquiry apart from a merely puzzled one?


Affective AI is billed as the more sophisticated sibling of sentiment analysis, the earlier text-only approach used mainly to gauge emotions in social media posts. Rather than relying on text alone, Affective AI combines visual and auditory inputs and draws on machine learning and psychology to interpret human emotions during an interaction.
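
To make the mechanics concrete, here is a minimal sketch of one common design, late fusion: separate classifiers score each modality (text, voice, face), and a weighted average picks the final label. Every score, weight, and emotion category below is an illustrative assumption, not any vendor's actual pipeline.

```python
# Late-fusion sketch: each modality yields per-emotion scores from its own
# (here hypothetical) classifier; a weighted sum produces the final label.
EMOTIONS = ["anger", "confusion", "neutral"]

def fuse(modality_scores: dict[str, dict[str, float]],
         weights: dict[str, float]) -> str:
    """Combine per-modality emotion scores into one predicted label."""
    combined = {e: 0.0 for e in EMOTIONS}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for emotion, score in scores.items():
            combined[emotion] += w * score
    return max(combined, key=combined.get)

# Illustrative outputs from per-modality classifiers for one utterance:
# the words alone look "confused", but voice and face suggest anger.
scores = {
    "text":  {"anger": 0.2, "confusion": 0.6, "neutral": 0.2},
    "voice": {"anger": 0.7, "confusion": 0.2, "neutral": 0.1},
    "face":  {"anger": 0.6, "confusion": 0.3, "neutral": 0.1},
}
weights = {"text": 0.3, "voice": 0.4, "face": 0.3}

print(fuse(scores, weights))  # -> "anger"
```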

Many large cloud providers already offer services related to Affective AI, such as Microsoft's Azure Cognitive Services and Amazon's Rekognition (the latter dogged by numerous controversies). Affective AI itself is not new, but with AI assistants proliferating in the workplace, the technology appears to have more commercial potential than ever.
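
For a concrete sense of what these cloud services expose, the sketch below queries Amazon Rekognition's face-analysis API through boto3. The image filename and region are placeholders and valid AWS credentials are assumed; each detected face comes back with emotion labels (e.g. ANGRY, CONFUSED) and confidence scores, which Amazon's own documentation describes as estimates of outward expression rather than inner state.

```python
# Minimal sketch: asking Amazon Rekognition for emotion estimates on one image.
# Assumes boto3 is installed, AWS credentials are configured, and
# "customer_frame.jpg" (a placeholder) exists locally.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("customer_frame.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # "ALL" includes the Emotions attribute
)

for face in response["FaceDetails"]:
    # Each face carries a list of {"Type": ..., "Confidence": ...} guesses;
    # pick the highest-confidence emotion label.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f'{top["Type"]}: {top["Confidence"]:.1f}%')
```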

Derek Hernandez, a senior analyst at PitchBook, noted in the report: "In the context of AI assistants and fully automated human-machine interactions, Affective AI is expected to achieve more human-like understanding and responses."

To achieve this, cameras and microphones become essential hardware companions for Affective AI, whether built into laptops and smartphones or installed in a dedicated physical space. Future wearable devices may open up yet more applications (so when a customer service bot asks for camera permission, this may be why).
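
As a rough illustration of that input side, the sketch below captures one webcam frame and a few seconds of microphone audio, the raw signals an affective-AI pipeline would consume. It assumes the third-party opencv-python and sounddevice packages and default device indices, both of which vary by machine.

```python
# Sketch of capturing the raw inputs affective AI feeds on: one camera
# frame and a short audio clip. Packages and device indices are assumptions.
import cv2                # pip install opencv-python
import sounddevice as sd  # pip install sounddevice

SAMPLE_RATE = 16_000  # 16 kHz mono, a common rate for speech models

cap = cv2.VideoCapture(0)   # index 0 = the default camera
ok, frame = cap.read()      # one BGR frame as a numpy array
cap.release()

audio = sd.rec(int(3 * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
sd.wait()                   # block until the 3-second recording finishes

if ok:
    cv2.imwrite("frame.jpg", frame)  # frame + audio would feed the emotion models
print(f"captured {audio.shape[0]} audio samples")
```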

To ride this wave, a crop of startups has emerged, including Uniphore (which has raised $610 million in total, including a $400 million investment led by NEA in 2022), MorphCast, Voicesense, Superceed, Siena AI, audEERING, and Opsis, all of which have drawn substantial funding from venture capitalists.

However, the premise of Affective AI is classic Silicon Valley: using technology to solve a human problem that technology itself created. And even if most future AI assistants do ship with some form of "automatic empathy," that does not mean the approach will actually work.

In fact, Affective AI suffered a setback back in 2019, when a group of researchers published a review of studies concluding that facial expressions alone cannot reliably reveal a person's emotions. In other words, trying to teach AI to recognize emotion by mimicking how humans supposedly do it (reading faces, body language, and tone of voice) may rest on a flawed premise.

Moreover, regulation could stand in the way. The EU's AI Act prohibits computer-vision emotion-detection systems in specific settings such as education and the workplace, and some state laws, like Illinois' BIPA, ban the collection of biometric data without consent.

All of this puts the AI future now being frantically built in Silicon Valley in a broader light. Either these AI assistants acquire enough emotional understanding to handle tasks like customer service, sales, and human resources, or we end up with an office life full of AI assistants no smarter than Siri circa 2023. It's hard to say which scenario is more disappointing.