– European privacy advocacy group noyb is suing OpenAI over inaccurate information generated by ChatGPT
– ChatGPT still fabricates information when it doesn’t have an answer
– OpenAI says it cannot fix the problem and cannot disclose where the data comes from
European privacy advocacy group noyb is suing OpenAI over inaccurate personal information generated by ChatGPT. Despite its progress, the chatbot still tends to fabricate answers when it lacks real information. An unnamed public figure represented by noyb objected after ChatGPT repeatedly returned an incorrect date of birth for them, an error that could have serious consequences.
Under Europe’s GDPR, information about individuals must be accurate, and people have the right to have it corrected or deleted on request. OpenAI, however, says it cannot meet these requirements: it can neither disclose where the data comes from nor prevent ChatGPT from producing false statements about individuals. The company acknowledges the problem but has no clear solution.
noyb filed a complaint with the Austrian data protection authority, criticizing OpenAI for its inability to stop ChatGPT from presenting inaccurate information about people. The group argues that simply making up data is not acceptable and that companies must do better. The problem of AI models generating false information extends beyond OpenAI and will persist until models can acknowledge uncertainty and their makers become more transparent about training data.
European users must weigh the benefits of tools like ChatGPT against their GDPR rights. At present, they cannot have both accuracy and utility, which raises questions about the responsibility of the companies developing these AI models.