Recently, ChatGPT users have noticed a bizarre phenomenon: when asked about certain names, such as "David Mayer," the chatbot either freezes or refuses to respond. The behavior quickly sparked widespread speculation and discussion.
Upon investigation, the names that trigger this behavior include Brian Hood, Jonathan Turley, and David Faber, among others. At first glance they seem unrelated, but closer analysis shows that each of these individuals is linked to a public incident or privacy issue.
For example, Brian Hood, an Australian mayor, accused ChatGPT of mistakenly associating him with criminal activity from decades ago. Jonathan Turley, a legal scholar, has been troubled by fabricated accusations the chatbot generated about him. David Faber is a well-known CNBC journalist.
Most notable is the name David Mayer. Investigations reveal that he was a British-American scholar, now deceased, who had been troubled for years because a wanted fugitive once used his name as an alias.
OpenAI eventually responded to the incident, acknowledging that its internal privacy tools do flag certain names. However, the company declined to disclose further details, citing privacy protection concerns.
The most likely explanation is that the system maintains an internal list of flagged names and applies special handling whenever one of them comes up. A fault in that handling could cause the chat to break off abruptly rather than decline gracefully.
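To make that hypothesis concrete, here is a minimal, purely speculative sketch of how such a post-generation name filter could produce the observed symptom: a hard-coded blocklist is checked against the streamed reply, and the response is cut off mid-stream instead of being declined politely. The blocklist contents, function names, and streaming setup are illustrative assumptions only, not anything OpenAI has disclosed.

```python
# Speculative illustration only: nothing here reflects OpenAI's actual code.
# The blocklist, function names, and streaming logic are assumptions.

BLOCKED_NAMES = {"david mayer", "brian hood", "jonathan turley"}  # hypothetical list


def stream_reply(chunks):
    """Yield response chunks, but abort if the accumulated text
    ever contains a blocked name (mimicking an abrupt cut-off)."""
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if any(name in buffer.lower() for name in BLOCKED_NAMES):
            # A guardrail firing mid-stream would look to the user like the
            # chat freezing or erroring out, not like a polite refusal.
            raise RuntimeError("response terminated by name filter")
        yield chunk


if __name__ == "__main__":
    try:
        for piece in stream_reply(["The person you asked about, ", "David ", "Mayer, ", "was..."]):
            print(piece, end="", flush=True)
    except RuntimeError as err:
        print(f"\n[error: {err}]")
```

Under this assumption, the filter sits outside the language model itself, which would explain why the model appears perfectly willing to answer right up until the flagged name is produced.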
Experts see this as another vivid illustration of the complexity of AI systems. It reminds us that while AI may seem omnipotent, it is ultimately driven by complex code and rules, which can produce unexpected technical failures at any moment.
For users, this incident reaffirms a simple truth: when seeking information, direct verification is always the most reliable method. No matter how intelligent AI may be, it remains a tool that requires constant improvement and correction.