1. Google’s AI search feature mistakenly suggests that “running with scissors” is a beneficial cardio exercise.
2. Companies, including Google, conduct red teaming to identify vulnerabilities in their products before release.
3. Errors in AI responses can become viral memes, highlighting the potential dangers of misinformation spread by AI.
Google’s new AI search feature mistakenly suggested that running with scissors could provide health benefits, prompting a flood of memes mocking AI product failures. Tech companies rely on red teaming, in which ethical hackers probe a product for vulnerabilities before release, yet despite extensive testing, Google still shipped a product with obvious flaws, fueling a trend of publicly clowning on AI failures.
Tech companies often downplay the impact of these flaws, insisting the viral examples are uncommon and not representative of most users’ experiences. However, some errors, such as recommending adding glue to pizza, stem from outdated or unreliable sources, in this case an eleven-year-old Reddit comment. Such blunders call into question the value of AI content deals, like Google’s multimillion-dollar data-licensing contract with Reddit.
When bad AI responses go viral, the AI can become even more confused by the new content generated around the topic. For example, a query asking whether a dog has ever played in the NHL led the AI to mistakenly identify a Calgary Flames player as a dog, perpetuating the misinformation. Training large-scale AI models on internet data is inherently risky because that data is riddled with inaccuracies and outright lies, which ultimately leads tech companies to ship flawed AI products.