‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

By techtost.com | 24 February 2024 | 6 Mins Read

Google apologized (or came very close to apologizing) for another embarrassing AI blunder this week: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive. But the model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which, when prompted, calls on a version of the Imagen 2 model to generate images on demand.

Recently, however, people discovered that asking it to create images of certain historical circumstances or people produced laughable results. For example, the Founding Fathers, who we know were white slave owners, were portrayed as a multicultural group that included people of color.

This embarrassing and easily reproduced issue was quickly lampooned by commenters online. It was also folded, predictably, into the ongoing debate about diversity, equity, and inclusion (currently at a reputational low point) and seized upon by pundits as evidence of the woke mind virus further infiltrating the already liberal tech sector.

Image Credits: An image created by Twitter user Patrick Ganley.

DEI has gone mad, conspicuously concerned citizens cried. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse for the left! (The left, it must be said, was also suitably disturbed by this strange phenomenon.)

But as anyone tech-savvy could tell you, and as Google explains in its relatively lame little apology post today, this problem was the result of a fairly reasonable fix for systemic bias in the training data.

Let’s say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 images of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice: the generative model will produce what it is most familiar with. And in many cases, that is a product not of reality but of the training data, which can have all kinds of biases.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of related images the model has ingested? The fact is that white people are overrepresented in many of these image collections (stock images, royalty-free photography, etc.), and as a result the model will default to white people in many cases if you don’t specify otherwise.
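To make that concrete, here is a toy sketch (in Python, with invented numbers) of how an attribute a prompt leaves unspecified ends up sampled at whatever frequency the training corpus happened to contain:

```python
# Toy illustration only: an unspecified attribute defaults to training-data
# frequencies. These counts are invented for demonstration, not measured
# from any real dataset.
import random

# Pretend these are attribute counts observed in a scraped image corpus.
TRAINING_FREQUENCIES = {"white": 7000, "black": 1200, "asian": 1100, "latino": 700}

def sample_unspecified_attribute() -> str:
    """When a prompt doesn't pin down an attribute, sample it the way the
    training data saw it."""
    attributes = list(TRAINING_FREQUENCIES)
    weights = list(TRAINING_FREQUENCIES.values())
    return random.choices(attributes, weights=weights, k=1)[0]

# Ask for 10 images of "a person walking a dog in a park": the skew shows.
print([sample_unspecified_attribute() for _ in range(10)])
```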

This is just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a photo of football players or someone walking a dog, you might want to get a range of people. You probably don’t want to only get images of people of one type of ethnicity (or any other characteristic).”

[Image: illustration of a group of recently laid-off people holding boxes. Caption: Imagine asking for an image like this: what if it was all one type of person? Bad result! Image Credits: Getty Images / victorikart]

There’s nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But what if you ask for 10, and they’re all white guys walking goldens in suburban parks? And what if you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety, not homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media. And there’s no simple solution. But in cases that are particularly common, sensitive, or both, companies like Google, OpenAI, and Anthropic invisibly include extra instructions for the model.

I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions (system prompts, as they’re sometimes called), where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist one, because even though the model has ingested thousands of them, it has also been trained, like most of us, not to tell those. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
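As a minimal sketch of that infrastructure, assuming the standard OpenAI Python client, here is what prepending a hidden system prompt looks like in practice (the instructions and model name are illustrative stand-ins, not any vendor’s actual production prompt):

```python
# Minimal sketch: a hidden "system prompt" prepended to every conversation.
# The instruction text and model name are illustrative stand-ins, not any
# vendor's real production values.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Be concise. Don't swear. Refuse to tell offensive jokes. "
    "When generating images of unspecified people, depict a range of "
    "genders and ethnicities."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The user never sees this message, but the model gets it every time.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Tell me a joke about programmers."},
    ],
)
print(response.choices[0].message.content)
```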

Where Google’s model went wrong was that it lacked implicit instructions for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity,” or whatever else they put in, “the US Founding Fathers signing the Constitution” is definitely not improved by the same.
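Hypothetically, the missing guard might look something like the sketch below; the marker list and suffix are invented for illustration, since Google hasn’t published how its actual prompt pipeline works:

```python
# Hypothetical sketch of silent prompt augmentation and its failure mode.
# The suffix and historical markers are invented for illustration; Google
# has not published its real pipeline.
DIVERSITY_SUFFIX = ", depicting people of a range of genders and ethnicities"

HISTORICAL_MARKERS = ("founding fathers", "1800s", "medieval", "roman legion")

def augment_prompt(user_prompt: str) -> str:
    """Silently append the diversity instruction, unless the prompt is
    plainly historical, in which case leave it alone."""
    lowered = user_prompt.lower()
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return user_prompt  # historical context: don't inject diversity
    return user_prompt + DIVERSITY_SUFFIX

# Gemini's apparent bug was the naive version: no historical check at all,
# so every prompt got the suffix, sensible or not.
print(augment_prompt("a person walking a dog in a park"))
print(augment_prompt("the Founding Fathers signing the Constitution"))
```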

As Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases and be over-conservative in others, producing images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Raghavan for stopping just short of it. More important is the interesting language in there: “The model became way more cautious than we intended.”

Now, how would a model “become” anything? It’s software. Someone, Google engineers in their thousands, built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if someone could have inspected the full prompt, they likely would have found what Google’s team did wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s as if they broke a glass and, rather than saying “we dropped it,” said “it fell.” (I have done this.)

Mistakes by these models are certainly inevitable. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes doesn’t belong to the models; it belongs to the people who made them. Today it’s Google. Tomorrow it’ll be OpenAI. The day after, and probably for a few months straight, it’ll be X.AI.

These companies have a vested interest in convincing you that AI makes its own mistakes. Don’t let them.
