OpenAI Faces New European Privacy Complaint Over ChatGPT Hallucinations

OpenAI, the creator of the popular AI chatbot ChatGPT, is once again under scrutiny in Europe for its AI's tendency to generate false information, often referred to as "hallucinations." This latest complaint, supported by the privacy rights advocacy group Noyb, involves a Norwegian individual who discovered ChatGPT fabricating a deeply disturbing and false narrative about him, claiming he was convicted of murdering two of his children and attempting to kill a third.
The Case of Arve Hjalmar Holmen
The complaint stems from an interaction in which Holmen asked ChatGPT about himself. The AI responded with a fabricated story, stating he had been convicted of murdering two of his children and sentenced to 21 years in prison. While the core accusation of murder is false, the response included some accurate details, such as the number and genders of his children and his hometown, making the fabrication all the more unsettling.
Noyb's data protection lawyers argue that while ChatGPT may display a disclaimer about potential inaccuracies, this is insufficient to absolve OpenAI of its responsibilities under the European Union's General Data Protection Regulation (GDPR). The GDPR mandates that personal data be accurate, and individuals have the right to rectification. Spreading false information, even with a disclaimer, is in their view a violation of these obligations.
GDPR Implications and Potential Penalties
Violations of the GDPR can lead to significant penalties, including fines of up to 4% of a company's global annual turnover. Previous GDPR interventions, such as Italy's temporary blocking of ChatGPT access, have prompted OpenAI to make changes to its services, including how it discloses information to users. Italy's data protection watchdog also fined OpenAI for processing personal data without a proper legal basis.
Since then, however, privacy watchdogs across Europe have taken a more cautious approach to generative AI as they work out how the GDPR applies to these new technologies. This has slowed enforcement, with some complaints, such as one filed in Poland, remaining under investigation for an extended period.
OpenAI's Response and the Evolution of ChatGPT
In response to the complaint, OpenAI stated that it is continuously researching ways to improve model accuracy and reduce hallucinations. The company noted that the specific version of ChatGPT involved in the incident has since been enhanced with online search capabilities, which are expected to improve accuracy. OpenAI's PR firm provided a statement indicating that while they are reviewing the complaint, the issue relates to an older version of the chatbot.
Noyb points to other instances of ChatGPT fabricating damaging falsehoods about individuals, including false claims that an Australian mayor was implicated in a bribery scandal and that a German journalist was a child abuser. These cases suggest that AI-generated defamation is not an isolated problem.
The Challenge of Data Retention and Accuracy
Despite these improvements, Noyb and Holmen remain concerned that the model may still retain the defamatory information internally. Noyb's lawyers emphasize that disclaimers do not negate legal obligations: AI companies must ensure the accuracy of the personal data they process internally, not just in their public-facing responses. Unaddressed hallucinations can cause significant reputational damage to the individuals concerned.
Noyb has filed the complaint with the Norwegian data protection authority, arguing that OpenAI's U.S. entity should be held accountable, not solely its Irish office, for product decisions affecting Europeans. This approach challenges the typical cross-border enforcement mechanisms under the GDPR, which often route complaints through the Irish Data Protection Commission (DPC) due to OpenAI's European headquarters being located there.
Ongoing Investigations and Future Outlook
The Irish DPC is currently handling a previous Noyb-backed GDPR complaint against OpenAI, which was filed in April 2024. The DPC has not provided a timeline for the conclusion of its investigation into ChatGPT's hallucinations. The outcome of these complaints could significantly influence how AI companies operate within the EU and set precedents for the regulation of generative AI technologies.
This article was updated with OpenAI's statement.
Original article available at: https://techcrunch.com/2025/03/19/chatgpt-hit-with-privacy-complaint-over-defamatory-hallucinations/