1. Google paused Gemini’s ability to generate images of people due to historical inaccuracies and complaints from users.
2. The promised fix for Gemini’s image generation issues has yet to appear, despite assurances from Google’s CEO and DeepMind’s co-founder.
3. The underlying problem is likely complex: the data sets used to train image generators like Gemini carry biases, and Google is struggling to find a middle path that corrects for them without repeating negative stereotypes.
In February, Google paused its AI-powered chatbot Gemini’s ability to generate images of people after users flagged historical inaccuracies. Despite promises of a quick fix, the feature is still disabled: while Google showcased other Gemini capabilities at its I/O developer conference, image generation of people remains turned off in the Gemini apps on web and mobile.
The holdup suggests the problem is more complex than initially thought. Data sets used to train image generators like Gemini tend to over-represent white people, which biases a model’s output and can reinforce negative stereotypes. Google’s attempt to correct for this with hardcoded diversity additions has not been successful, leaving the company searching for a middle ground that neither reinforces stereotypes nor over-corrects into historical inaccuracy.
It is unclear whether Google will be able to resolve the issue completely, underscoring how hard it is to correct bias in AI systems. The episode is a reminder that fixing misbehaving AI, especially when bias is involved, rarely has a simple solution.