OpenAI is facing a privacy complaint in Norway over its AI chatbot ChatGPT's tendency to generate false information about real people. The complaint, supported by the privacy advocacy group Noyb, was filed on behalf of Arve Hjalmar Holmen, who was shocked and angered to find that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill a third.
Past privacy complaints about ChatGPT have mainly concerned inaccuracies in basic personal data, such as birth dates or biographical details. A key issue is that OpenAI offers no robust mechanism for individuals to correct misinformation the AI generates about them; it typically offers only to block responses to prompts about the affected person. The EU's General Data Protection Regulation (GDPR), however, grants Europeans a range of data rights, including the right to rectification of inaccurate personal data.
Noyb points out that the GDPR requires personal data to be accurate and gives individuals the right to have inaccurate information corrected. Noyb data protection lawyer Joakim Söderberg argues that OpenAI's small disclaimer at the bottom of the interface, noting that ChatGPT can make mistakes, is insufficient. According to Noyb, the GDPR makes AI developers responsible for ensuring their systems do not spread serious falsehoods about people.
GDPR violations can result in fines of up to 4% of global annual revenue. In spring 2023, Italy's data protection authority temporarily blocked access to ChatGPT, prompting OpenAI to adjust its user information disclosures. Since then, however, European privacy regulators have taken a more cautious approach to generative AI as they work out how best to apply existing rules to these tools.
Noyb's new complaint aims to raise regulatory awareness of the dangers of AI-generated misinformation. The group shared a screenshot of an interaction in which ChatGPT fabricated a completely false and disturbing history in response to questions about Holmen. This is not an isolated incident: Noyb points to other users who have suffered similar harm from fabricated claims.
Although ChatGPT stopped repeating the false accusations against Holmen after model updates, Noyb and Holmen remain concerned that the erroneous information may persist within the AI model itself. Noyb has filed the complaint with the Norwegian Data Protection Authority in the hope that it will open an investigation.
Key Points:
🌐 Noyb is supporting a Norwegian individual's privacy complaint against OpenAI over false information generated by ChatGPT.
⚖️ Under the GDPR, personal data must be accurate, a requirement Noyb argues OpenAI failed to meet.
🔍 Noyb hopes this complaint will raise regulatory awareness of the issue of AI misinformation.