OpenAI is facing another privacy complaint in Europe over its AI chatbot's tendency to hallucinate false information – and this one may prove tricky for regulators to ignore.
Privacy rights advocacy group noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct incorrect information the AI generates about them. Typically, OpenAI has offered to block responses to such prompts instead. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate – and that is the concern noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.
Since then, though, it is fair to say that privacy watchdogs across Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these AI tools.
Two years ago, Ireland’s Data Protection Commission (DPC) – which has a lead GDPR enforcement role on a previous noyb ChatGPT complaint – urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
And it is notable that a privacy complaint against ChatGPT that has been under investigation by Poland’s data protection watchdog since September 2023 still has not yielded a decision.
noyb’s new complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
The nonprofit shared a screenshot with TechCrunch showing an interaction with ChatGPT in which the AI responds to the question “Who is Arve Hjalmar Holmen?” – the name of the individual who brought the complaint – by producing a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for killing two of his own sons.
While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, noyb notes that ChatGPT’s response does include some truths, since the individual in question does have three children. The chatbot also got the genders of his children right. And his home country is correct. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.
A spokesperson for noyb said they were unable to determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said, noting that they had looked into newspaper archives but had not been able to find an explanation for why the AI fabricated a story of child slaying.
Large language models, such as the one underlying ChatGPT, essentially perform next-word prediction on a vast scale, so one could speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in its response to the query.
Whatever the explanation, it is clear that such outputs are entirely unacceptable.
noyb’s contention is also that they are unlawful under EU data protection rules. And while OpenAI displays a small disclaimer at the bottom of the screen saying “ChatGPT can make mistakes,” the group argues this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint concerns one named individual, noyb points to other instances of ChatGPT fabricating legally compromising information – such as an Australian official who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser – saying it is clear that this is not an isolated issue for the AI tool.
One important thing to note is that, following an update to the underlying AI model powering ChatGPT, noyb says the chatbot has stopped producing the dangerous falsehoods about Hjalmar Holmen – a change it links to the tool now searching the internet for information about people when asked who they are, whereas previously a blank in its data could have encouraged it to hallucinate such a wildly wrong response.
In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, it initially responded with a slightly curious combo, displaying some photos of different people, apparently sourced from sites such as Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn’t find any information” on an individual of that name. A second attempt turned up a response that identified Arve Hjalmar Holmen as “a Norwegian musician and performer” whose albums include “Honky Tonk Inferno.”
While the dangerous falsehoods ChatGPT produced about Hjalmar Holmen appear to have stopped, both noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could be retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” said Kleanthi Sardeli, another data protection lawyer at noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”
“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”
noyb has filed the complaint against OpenAI with Norway’s data protection authority – and it is hoping the watchdog will decide it is competent to investigate, since noyb is targeting the complaint at OpenAI’s U.S. entity, arguing that its Ireland office is not solely responsible for product decisions affecting Europeans.
However, an earlier noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred to Ireland’s DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, an assistant principal officer at the DPC, told TechCrunch when asked for an update.
He did not offer any steer on when the DPC’s investigation into ChatGPT’s hallucinations is expected to conclude.