A Norwegian man, Arve Hjalmar Holmen, has filed a formal complaint with the Norwegian Data Protection Authority after ChatGPT, an AI chatbot developed by OpenAI, falsely claimed he had killed his two sons and been sentenced to 21 years in prison. The incident highlights growing concern over AI “hallucinations,” in which generative AI systems invent false information and present it as fact.
Mr. Holmen discovered the fabricated story when he searched his name on ChatGPT in August 2024. The chatbot responded with a detailed but entirely false account, stating that he was the father of two boys, aged 7 and 10, who were found dead in a pond near their home in Trondheim, Norway, in December 2020. While the stated age gap roughly matched that of his real children, the rest of the account was invented.
“This is very damaging to me,” Mr. Holmen said. “Some think that there is no smoke without fire—the fact that someone could read this output and believe it is true is what scares me the most.”
Digital rights group Noyb, which filed the complaint on Mr. Holmen’s behalf, argues that the false information is defamatory and breaches European data protection law under the GDPR, which requires personal data to be accurate. Noyb emphasized that Mr. Holmen has never been accused or convicted of any crime and is a law-abiding citizen.
ChatGPT includes a disclaimer stating that it can make mistakes and that users should verify important information. However, Noyb lawyer Joakim Söderberg called this insufficient, stating, “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
AI hallucinations remain a significant challenge for developers. Earlier this year, Apple suspended its AI news summary tool in the UK after it generated false headlines, and Google’s AI, Gemini, has also produced bizarre and inaccurate responses. Despite ongoing research, the exact cause of these hallucinations in large language models remains unclear.
Since the incident, OpenAI has updated ChatGPT so that it now searches current news articles for relevant information when answering queries about people. However, Noyb criticized OpenAI for its lack of transparency, calling large language models a “black box” and noting that the company does not respond to data access requests. This case underscores the need for greater accountability and accuracy in AI systems to prevent harm to individuals.