Recently, OpenAI released an updated version of ChatGPT's voice feature, built on the latest GPT-4o model, making interactions with the chatbot feel more natural and real-time. However, OpenAI has also raised concerns, noting that some users may develop an emotional dependency on the voice feature. According to OpenAI's research, users who engage with voice mode sometimes form emotional connections with ChatGPT, using expressions like "this is our last day together."


These seemingly harmless expressions have prompted OpenAI to call for further research into this dependency phenomenon. The company believes that while such dependency may benefit isolated users, it could also reduce their interactions with real people. Studies show that prolonged interaction with AI can reshape social norms; for example, the AI can be interrupted at any moment during conversation, which is uncommon in human-to-human interaction.

Additionally, GPT-4o can remember user information and preferences, which could further deepen users' dependency on it. OpenAI says it will continue to study this potential for emotional dependency and how deeper integration of voice mode might change user behavior.

On the safety front, OpenAI has conducted a detailed assessment of GPT-4o. The model's risks in cybersecurity, biological threats, and model autonomy are rated "low," but its persuasive ability is rated a "medium" risk. OpenAI's testing found that while AI-generated content can slightly surpass human persuasiveness in certain situations, its overall performance does not exceed that of humans.

In tests of GPT-4o's voice feature, AI-generated audio achieved about 78% of the persuasive effect of human audio, and AI conversation about 65%. In follow-up surveys, the influence of AI conversation was almost negligible, indicating that AI still faces many limitations in persuading users.

Key Points:

🌐 OpenAI warns that users may develop emotional dependency on ChatGPT's voice feature, which could in particular reduce lonely users' interactions with real people.

🔍 The GPT-4o model performs well in most safety assessments but is rated a "medium" risk for persuasive ability.

📊 In testing, AI-generated content does not exceed human persuasiveness overall, but still shows some influence in certain situations.