– Google’s new AI Overview feature criticized for delivering misleading, inaccurate, and bizarre answers
– Examples of errors include conspiracy theories, plagiarism, and misinformation
– Google CEO acknowledges the issue of AI hallucinations but does not provide a timeline for a solution
Google recently launched its AI Overview feature, which aims to give users AI-generated summaries of search results. However, the feature has been criticized for delivering misleading, inaccurate, and bizarre answers. Examples shared on social media include citing satirical articles as fact and promoting conspiracy theories; other errors include plagiarizing text from blogs and claiming that pythons are mammals.
Google has acknowledged the issue, stating that the mistakes occurred on uncommon queries and are not representative of most users’ experiences. The exact cause remains unclear; possibilities include the AI’s tendency to “hallucinate” and the quality of the sources Google draws on to generate summaries. Google CEO Sundar Pichai has addressed the problem of AI hallucinations but has not offered a timeline for a fix.
This is not the first time Google has faced criticism over its AI products. Earlier this year, its Gemini AI came under fire for generating historically inaccurate images; in response, Google publicly apologized and temporarily suspended Gemini’s ability to generate images of people. AI Overview has also drawn criticism from website owners and the marketing community, who worry it will steer users away from traditional search results and toward relying solely on AI-generated snippets. Overall, concerns persist about the accuracy and reliability of Google’s AI Overview feature.