In February, Google paused its Gemini AI-powered chatbot’s ability to generate images of people after users complained of historical inaccuracies. Asked to depict “a Roman legion,” for example, Gemini would show an anachronistic group of racially diverse soldiers, while rendering “Zulu warriors” as stereotypically Black.
Google CEO Sundar Pichai apologized, and Demis Hassabis, the co-founder of Google’s AI research arm DeepMind, said a fix should arrive “in very short order” — within the next two weeks. But we are now well into May and the promised solution remains elusive.
Google announced several other Gemini features at its annual I/O developer conference this week, from custom chatbots to vacation itinerary planning and integrations with Google Calendar, Keep, and YouTube Music. However, creating images of people continues to be disabled in the Gemini web and mobile apps, a Google spokesperson confirmed.
So what’s the holdup? Well, the problem is likely more complex than Hassabis let on.
The datasets used to train image generators like Gemini’s generally contain more images of white people than of people of other races and ethnicities, and the images of non-white people in those datasets often reinforce negative stereotypes. Google, in an apparent attempt to correct for these biases, implemented clumsy hard-coding under the hood. Now it’s struggling to find a reasonable middle path that avoids repeating history.
Will Google get there? Maybe. Maybe not. In any event, the protracted episode serves as a reminder that no fix for misbehaving AI is easy, especially when bias is at the root of the misbehavior.