‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

By techtost.com | 24 February 2024 | 6 Mins Read

Google apologized (or came very close to apologizing) for another embarrassing AI blunder this week: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive. But the model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which, when prompted, calls on a version of the Imagen 2 model to generate images on demand.

Recently, however, people discovered that asking it to create images of certain historical circumstances or people produced laughable results. The Founding Fathers, for instance, who we know were white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily reproduced issue was quickly lampooned by commenters online. It also, predictably, got drawn into the ongoing debate about diversity, equity, and inclusion (currently at a reputational low point) and was seized upon by pundits as evidence of the woke mind virus further infiltrating the already liberal tech sector.

Image Credits: An image created by Twitter user Patrick Ganley.

DEI has gone mad, shouted visibly worried citizens. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse of the left! (The left, it must be said, was also suitably disturbed by this strange phenomenon.)

But as anyone tech-savvy could tell you, and as Google explains in its relatively lame little apology post today, this problem was the result of a fairly reasonable fix for systemic bias in the training data.

Let’s say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 images of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice: the generative model will output what it is most familiar with. And in many cases, that is a product not of reality but of the training data, which can have all kinds of biases.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are overrepresented in many of these image collections (stock imagery, rights-free photography, and so on), and as a result the model will default to white people in many cases if you don’t specify otherwise.

This is just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a photo of football players or someone walking a dog, you might want to get a range of people. You probably don’t want to only get images of people of one type of ethnicity (or any other characteristic).”

Illustration of a group of recently laid-off people holding boxes. Imagine asking for an image like this and getting all one type of person: bad result! Image Credits: Getty Images / victorikart

There’s nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety rather than homogeneity, despite how its training data might bias it.
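
To make the “dealer’s choice” point concrete, here is a deliberately crude Python sketch. Real image models don’t sample discrete attribute labels like this, and the values and weights below are invented; the toy only shows why unconditioned outputs mirror whatever the training data over-represents, and how sampling attributes uniformly when the user doesn’t specify one produces the variety the article argues for.

```python
import random
from collections import Counter

# Toy stand-in for a biased training distribution; the attribute values and
# weights are invented purely for illustration.
TRAINING_DISTRIBUTION = {
    "white": 0.7,
    "Black": 0.1,
    "East Asian": 0.1,
    "South Asian": 0.05,
    "Middle Eastern": 0.05,
}

def sample_like_the_training_data(n: int) -> list[str]:
    # An unconditioned model tends to reproduce whatever its data over-represents.
    values = list(TRAINING_DISTRIBUTION)
    weights = list(TRAINING_DISTRIBUTION.values())
    return random.choices(values, weights=weights, k=n)

def sample_with_variety(n: int) -> list[str]:
    # When the prompt doesn't specify, sample attribute values uniformly instead.
    return random.choices(list(TRAINING_DISTRIBUTION), k=n)

print(Counter(sample_like_the_training_data(10)))  # mostly "white", mirroring the data
print(Counter(sample_with_variety(10)))            # a much more varied mix
```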

This is a common problem across all kinds of generative media. And there is no simple solution. But in cases that are especially common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.

I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions, sometimes called system prompts, where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist one, because even though the model has ingested thousands of them, it has also been trained, like most of us, not to tell them. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
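
As a rough illustration of how these implicit instructions sit in front of every conversation, here is a minimal sketch. The system prompt text and the Message helper are hypothetical, not any vendor’s actual wording or API; the point is simply that the same hidden instructions are prepended before each user turn.

```python
from dataclasses import dataclass

# Hypothetical instructions of the kind the article describes; not any
# vendor's actual system prompt.
SYSTEM_PROMPT = (
    "Be concise. Don't swear. "
    "If a request doesn't specify how people should look, show a range of people."
)

@dataclass
class Message:
    role: str      # "system", "user", or "assistant"
    content: str

def build_conversation(user_turns: list[str]) -> list[Message]:
    # The invisible system prompt is prepended before the user's messages,
    # which is how chat-style LLM APIs typically carry implicit instructions.
    messages = [Message("system", SYSTEM_PROMPT)]
    messages.extend(Message("user", turn) for turn in user_turns)
    return messages

for m in build_conversation(["Tell me a joke about programmers."]):
    print(f"{m.role}: {m.content}")
```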

Where Google’s model went wrong was that it had no implicit guidance for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity” or whatever else they put in, “the US Founding Fathers signing the Constitution” is definitely not improved by the same.
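
Here is a hypothetical sketch of the conditional check that was evidently missing. The marker list and suffix below are made up, and Google has not published its actual pipeline; the point is only that a blanket augmentation needs an escape hatch for prompts that pin down a specific historical context.

```python
# Everything here is illustrative: the suffix and marker list are invented,
# and this is not Google's actual prompt-augmentation logic.
DIVERSITY_SUFFIX = " The people depicted are of random genders and ethnicities."

HISTORICAL_MARKERS = (
    "founding fathers",
    "signing the constitution",
    "medieval",
    "1800s",
)

def augment_prompt(user_prompt: str) -> str:
    # Silently append the diversity instruction, but leave historically
    # specific prompts alone; skipping this check is the failure mode
    # the article attributes to Gemini.
    lowered = user_prompt.lower()
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return user_prompt
    return user_prompt + DIVERSITY_SUFFIX

print(augment_prompt("a person walking a dog in a park"))
print(augment_prompt("the US Founding Fathers signing the Constitution"))
```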

As Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that clearly should not show a range. And second, over time, the model became much more cautious than we intended and refused to respond to some prompts entirely, wrongly interpreting some very harmless prompts as sensitive.

These two things caused the model to overcompensate in some cases and be overly conservative in others, producing images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Raghavan for stopping just short of it. More important is some interesting language in there: “The model became much more cautious than we intended.”

Now, how does a model “become” anything? It’s software. Someone, Google’s thousands of engineers, built it, tested it, and iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if someone could have inspected the full prompt, they probably would have found what Google’s team did wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s as if they broke a glass and, instead of saying “we dropped it,” said “it fell.” (I have done this.)

Mistakes from these models are certainly inevitable. They hallucinate, reflect biases, and behave in unexpected ways. But the responsibility for those mistakes doesn’t lie with the models; it lies with the people who made them. Today it’s Google. Tomorrow it will be OpenAI. The day after that, and probably for a few months straight, it will be X.AI.

These companies have a vested interest in convincing you that AI makes its own mistakes. Don’t let them.
