‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

By techtost.com · 24 February 2024 · 6 Mins Read

Google apologized (or came very close to apologizing) for another embarrassing AI blunder this week: an image-generation model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive. But the model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which, when prompted, calls on a version of the Imagen 2 model to generate images on demand.

Recently, however, people discovered that asking it to create images of certain historical circumstances or people produced laughable results. The Founding Fathers, for instance, whom we know to have been white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily reproduced issue was quickly mocked by commentators online. It was also, predictably, pulled into the ongoing debate about diversity, equity, and inclusion (currently at a reputational low point) and seized upon by pundits as evidence of the woke mind virus further infiltrating the already liberal tech sector.

Image Credits: An image created by Twitter user Patrick Ganley.

DEI has gone mad, visibly worried citizens shouted. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse of the left! (The left, it must be said, was also suitably perturbed by this weird phenomenon.)

But as anyone tech-savvy could tell you, and as Google explains in its relatively lame little apology post today, this problem was the result of a fairly reasonable fix for systemic bias in the training data.

Let’s say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 images of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice — the generative model will show what it’s most familiar with. And in many cases, that is a product not of reality but of the training data, which can have all kinds of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are overrepresented in many of these image collections (stock imagery, rights-free photography, and so on), and as a result the model will default to white people in many cases if you don’t specify otherwise.
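
As a toy illustration of that defaulting behavior, here is a hedged sketch: the category labels and weights below are invented for illustration, standing in for frequencies a model might absorb from a skewed image corpus.

```python
import random

# Toy model of a skewed training corpus. The labels and weights are
# invented for illustration, not statistics from any real dataset.
TRAINING_FREQUENCIES = {
    "white person, golden retriever, suburban park": 70,
    "Black person, mixed-breed dog, city park": 12,
    "East Asian person, shiba inu, riverside park": 10,
    "Middle Eastern person, saluki, desert park": 8,
}

def sample_unconditioned(n: int) -> list[str]:
    """Simulate outputs for the vague prompt 'a person walking a dog in a park'.

    With no attributes specified, sampling simply mirrors whatever was most
    common in training, so the majority category dominates the results.
    """
    options = list(TRAINING_FREQUENCIES)
    weights = list(TRAINING_FREQUENCIES.values())
    return random.choices(options, weights=weights, k=n)

for description in sample_unconditioned(10):
    print(description)
```

Run repeatedly, roughly seven of every ten outputs land in the majority category, which is exactly the homogeneity problem the hidden instructions discussed below try to correct.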

This is just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a photo of football players or someone walking a dog, you might want to get a range of people. You probably don’t want to only get images of people of one type of ethnicity (or any other characteristic).”

Illustration of a group of recently laid-off people holding boxes. Imagine asking for an image like this — what if it was all one type of person? Bad result! Image Credits: Getty Images / victorikart

Nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But what if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety rather than homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media. And there is no simple solution. But in cases that are particularly common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.

I cannot stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions — system prompts, as they’re sometimes called, where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist one — because even though the model has ingested thousands of them, it has also been trained, like most of us, not to tell them. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
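
To make that concrete, here is a minimal sketch of how an invisible instruction gets prepended to every request. It uses the OpenAI Python SDK purely as an example; the model name and the hidden instruction text are illustrative placeholders, not any vendor’s actual system prompt.

```python
# Minimal sketch of a "system prompt" silently shaping every reply.
# Assumes the OpenAI Python SDK; the model name and instruction text
# are illustrative placeholders, not any vendor's real hidden prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_INSTRUCTIONS = (
    "Be concise. Do not swear. Decline requests for offensive jokes."
)

def ask(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The user never sees this first message, but the model does.
            {"role": "system", "content": HIDDEN_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("Tell me a joke."))
```

Most chat products wrap user input in some version of this pattern; the argument here is only about what goes in the hidden half.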

Where Google’s model went wrong was that it lacked implicit guidance for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity” or whatever else they put in, “the US Founding Fathers signing the Constitution” is definitely not improved by the same.
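
Google hasn’t published how Gemini’s prompt augmentation actually works, so the following is only a hypothetical sketch of the conditional logic described above, with a made-up keyword heuristic standing in for whatever really decides when the diversity hint applies.

```python
# Hypothetical illustration only: Google has not disclosed Gemini's actual
# prompt-augmentation logic. The keyword list and hint text are invented.
HISTORICAL_MARKERS = {
    "founding fathers",
    "signing the constitution",
    "ancient rome",
    "medieval",
    "1800s",
}

DIVERSITY_HINT = " The people depicted are of random genders and ethnicities."

def augment_prompt(user_prompt: str) -> str:
    lowered = user_prompt.lower()
    # If the prompt pins down a specific historical context, appending the
    # diversity hint would distort it, so leave the prompt untouched.
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return user_prompt
    # Otherwise, silently append the hint before it reaches the image model.
    return user_prompt + DIVERSITY_HINT

print(augment_prompt("a person walking a dog in a park"))
print(augment_prompt("the US Founding Fathers signing the Constitution"))
```

The reported failure amounts to the absence of any such check: every prompt, historical or not, got the hint.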

As Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became much more cautious than we intended and refused to respond to some prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things caused the model to overcompensate in some cases and be overly conservative in others, leading to images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Raghavan for stopping just short of it. More important is the interesting language in there: “The model became much more cautious than we intended.”

Now, how does a model “become” anything? It’s software. Someone — Google’s thousands of engineers — built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if anyone could have inspected the full prompt, they would probably have found what Google’s team got wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s as if they broke a glass and, rather than saying “we dropped it,” said “it fell.” (I have done this.)

Mistakes from these models are certainly inevitable. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes doesn’t lie with the models — it lies with the people who made them. Today it’s Google. Tomorrow it will be OpenAI. The day after that, and probably for a few months straight, it will be X.AI.

These companies have a vested interest in convincing you that AI makes its own mistakes. Don’t let them.
