OpenAI is once again facing scrutiny in Europe over its AI chatbot, ChatGPT, which continues to generate false and damaging information. A new privacy complaint, backed by the privacy rights group Noyb, has been filed with the Norwegian data protection authority after an individual discovered that the chatbot falsely claimed he had been convicted of murdering two of his children and attempting to kill the third.
This latest complaint highlights ongoing concerns about ChatGPT’s tendency to produce hallucinated content, including inaccurate personal information. Previous complaints focused on errors such as incorrect birth dates or biographical details, but this new case takes the issue further, raising questions about how OpenAI handles false information under the European Union’s General Data Protection Regulation (GDPR).
Joakim Söderberg, a data protection lawyer at Noyb, stressed that the GDPR requires personal data to be accurate and gives individuals the right to correct inaccuracies. “Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough,” Söderberg stated. “You can’t just spread false information and then add a small disclaimer saying that everything you said may not be true.”
Under the GDPR, confirmed violations can result in penalties of up to 4% of a company’s global annual turnover. OpenAI, which has already faced consequences under the regulation, including a €15 million fine from Italy’s data protection authority in late 2024, could face further penalties or be forced to alter its AI products.
Despite this, European regulators have largely adopted a cautious stance on generative AI, with some watchdogs, including Ireland’s Data Protection Commission (DPC), advocating for a measured approach. In the case of ChatGPT, the DPC has not yet reached a conclusion on an ongoing investigation into a previous complaint.
The most recent complaint stems from an incident involving Arve Hjalmar Holmen, a Norwegian man who was horrified to discover that ChatGPT had fabricated a detailed, false account of his life. In response to the query “who is Arve Hjalmar Holmen?”, ChatGPT produced a tragic narrative stating that Holmen had been convicted of murdering two of his children and sentenced to 21 years in prison. Although some details about his children and hometown were correct, the fabricated murder story raised serious privacy and legal concerns.
Noyb’s investigation into the incident found no basis for ChatGPT’s claim. “We did research to make sure that this wasn’t just a mix-up with another person,” a spokesperson said. The group speculates that the model’s training data may have included news stories about filicide that it blended into its answer, since large language models generate text by predicting the next most likely word from patterns in vast datasets.
The complaint underscores the duty of AI companies to prevent their systems from generating harmful falsehoods. Although OpenAI includes a disclaimer that ChatGPT can make mistakes, Noyb argues this is insufficient under the GDPR. “AI companies should stop acting as if the GDPR does not apply to them,” said Kleanthi Sardeli, another data protection lawyer at Noyb. “If hallucinations are not stopped, people can easily suffer reputational damage.”
ChatGPT has since stopped producing the false claims about Holmen: following an update to the underlying AI model, the chatbot now returns more accurate responses when asked about him. However, Noyb and Holmen remain concerned that the erroneous data may still be retained within the model itself, potentially influencing future outputs.
While Noyb’s current complaint targets OpenAI’s U.S. entity, the organization hopes the Norwegian authority will deem itself competent to investigate, arguing that OpenAI’s Irish office is not solely responsible for product decisions affecting Europeans. An earlier Noyb-backed complaint, originally filed in Austria, was referred to the DPC in Ireland, where it remains unresolved.
As privacy regulators continue to investigate AI tools, including ChatGPT, the ongoing concerns about false and defamatory content highlight the need for stronger safeguards and clearer accountability for AI companies.