Gemini’s data analysis capabilities are not as good as Google claims

By techtost.com | 30 June 2024 | 7 Mins Read

One of the selling points of Google’s flagship AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can supposedly process and analyze. In press briefings and demos, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their “long context,” such as summarizing multiple documents hundreds of pages long or searching across scenes in movie footage.

But new research suggests the models aren’t actually very good at these things.

Two separate studies investigated how well Google’s Gemini models and others make sense of an enormous amount of data — think works the length of War and Peace. Both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40% to 50% of the time.

“While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoctoral researcher at UMass Amherst and a co-author on one of the studies, told TechCrunch.

Gemini’s context window falls short

A model’s context, or context window, refers to the input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question — “Who won the 2020 US presidential election?” — can serve as context, as can a movie script, a show, or an audio clip. And as context windows grow, so does the size of the documents that fit into them.

The latest versions of Gemini can take in more than 2 million tokens as context. (“Tokens” are subdivided bits of raw data, like the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) That equates to roughly 1.4 million words, two hours of video, or 22 hours of audio — the largest context of any commercially available model.
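
As a rough illustration of that arithmetic, the sketch below estimates how many English words fit in a context window of a given token size; the 1.4 tokens-per-word ratio is an assumed average, and real tokenizers vary by model and language.

```python
# Rough, illustrative arithmetic only: real tokenizers vary by model and language.
TOKENS_PER_WORD = 1.4  # assumed average for English text


def words_that_fit(context_window_tokens: int,
                   tokens_per_word: float = TOKENS_PER_WORD) -> int:
    """Estimate how many English words fit into a context window of a given size."""
    return int(context_window_tokens / tokens_per_word)


if __name__ == "__main__":
    # 2 million tokens at ~1.4 tokens per word is roughly 1.4 million words,
    # which matches the figure quoted above.
    print(words_that_fit(2_000_000))  # ~1,428,571 words
```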

At a briefing earlier this year, Google showed several pre-recorded demonstrations meant to illustrate the potential of Gemini’s long-context capabilities. In one, Gemini 1.5 Pro was asked to search the transcript of the Apollo 11 moon landing telecast — around 402 pages — for quotes containing jokes, and then to find a scene in the telecast that resembled a pencil sketch.

Google DeepMind research vice president Oriol Vinyals, who led the briefing, described the model as “magical”.

“[1.5 Pro] performs these sorts of reasoning tasks across every page, across every word,” he said.

That may have been an exaggeration.

In one of the aforementioned studies, Karpinska, along with researchers from the Allen Institute for AI and Princeton, asked the models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so the models couldn’t “cheat” by relying on prior knowledge, and they peppered the statements with references to specific details and plot points that would be impossible to grasp without reading the books in their entirety.

Given a statement like “Using her skills as an Apoth, Nusis is able to reverse the type of portal opened by the reagent key found in Rona’s wooden chest,” Gemini 1.5 Pro and 1.5 Flash — having ingested the relevant book — had to say whether the statement was true or false and explain their reasoning.
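
A minimal sketch of that kind of evaluation loop is shown below; the `Claim` type, the prompt wording, and the `generate` callback are illustrative assumptions, not the papers’ actual harness.

```python
# Sketch of a long-context true/false evaluation, assuming a generic
# generate(prompt) -> str callback for whichever model is being tested.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Claim:
    statement: str
    label: bool  # ground truth: True if the claim holds in the book


def evaluate_claims(book_text: str, claims: List[Claim],
                    generate: Callable[[str], str]) -> float:
    """Return the fraction of claims the model labels correctly."""
    correct = 0
    for claim in claims:
        prompt = (
            f"{book_text}\n\n"
            f"Claim: {claim.statement}\n"
            "Is the claim true or false? Answer 'true' or 'false', then explain."
        )
        answer = generate(prompt).strip().lower()
        predicted = answer.startswith("true")
        correct += int(predicted == claim.label)
    return correct / len(claims)
```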

(Image credits: UMass Amherst)

Tested on a book roughly 260,000 words long (~520 pages), the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. That means a coin flip would do markedly better at answering questions about the book than Google’s latest machine learning models. Averaging across all the benchmark results, neither model managed to achieve better-than-chance accuracy on the question-answering task.

“We noticed that the models have more difficulty verifying claims that require examining larger parts of the book, or even the entire book, compared to claims that can be resolved by retrieving sentence-level evidence,” Karpinska said. “Qualitatively, we also observed that models have difficulty verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text.”

The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “reason over” videos — that is, to search through them and answer questions about their content.

The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in them (e.g., “What cartoon character is on this cake?”). To evaluate the models, they picked one of the images at random and inserted “distractor” images before and after it to create slideshow-like footage.
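
The sketch below illustrates that construction; the `target_image` and `distractor_pool` names are assumptions for the example, not the study’s actual pipeline.

```python
# Sketch of the distractor-slideshow construction described above (illustrative only).
import random
from typing import List, Tuple


def build_slideshow(target_image, distractor_pool: List, num_distractors: int = 24,
                    seed: int = 0) -> Tuple[List, int]:
    """Place one target image at a random position among distractor images,
    producing slideshow-like footage the model must scan to answer a question."""
    rng = random.Random(seed)
    distractors = rng.sample(distractor_pool, num_distractors)
    position = rng.randrange(num_distractors + 1)
    frames = distractors[:position] + [target_image] + distractors[position:]
    return frames, position  # frames are fed to the model; position is kept for analysis
```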

Flash didn’t perform especially well. In a test that had the model transcribe six handwritten digits from a “slideshow” of 25 images, Flash got around 50% of the transcriptions right. Accuracy dropped to around 30% with eight digits.

“Real question-answering tasks over images seem to be especially difficult for all the models we tested,” Michael Saxon, a PhD student at UC Santa Barbara and one of the study’s co-authors, told TechCrunch. “That little bit of reasoning — recognizing that a number is in a box and reading it — may be what breaks the model.”

Google is overpromising with Gemini

Neither of the studies has been peer-reviewed, nor do they examine the versions of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts. (Both tested the 1-million-token context versions.) And Flash isn’t meant to be as capable as Pro in terms of performance; Google markets it as a low-cost alternative.

Still, both add fuel to the fire that Google has been overpromising — and under-delivering — with Gemini from the start. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google is the only model provider that gives its context window top billing in its advertisements.

“There’s nothing wrong with simply claiming, ‘Our model can take in X number of tokens,’ based on objective technical details,” Saxon said. “But the question is, what useful thing can you do with it?”

Generative AI broadly is coming under increased scrutiny as businesses (and investors) grow frustrated with the technology’s limitations.

In a pair of recent surveys from the Boston Consulting Group, about half of the respondents — all C-suite executives — said they don’t expect generative AI to deliver substantial productivity gains and that they’re worried about the potential for mistakes and data compromises arising from generative AI tools. PitchBook recently reported that, for two consecutive quarters, early-stage generative AI dealmaking has declined, plummeting 76% from its Q3 2023 peak.

Faced with meeting-summarizing chatbots that conjure up imaginary details about people and AI search platforms that basically amount to plagiarism generators, customers are on the hunt for promising differentiators. Google – which has raced, at times clumsily, to catch up with its generative AI rivals – was desperate to make Gemini’s context one of those differentiators.

But the bet was premature, it seems.

“We haven’t come up with a way to really show that ‘reasoning’ or ‘understanding’ over long documents is taking place, and basically every group releasing these models is cobbling together their own ad hoc evaluations to make these claims,” Karpinska said. “Without knowing how long-context processing is implemented — and the companies don’t share those details — it’s hard to say how realistic these claims are.”

Google did not respond to a request for comment.

Both Saxon and Karpinska believe that the antidotes to the hyped-up claims around generative AI are better benchmarks and, in the same vein, a greater emphasis on third-party critique. Saxon notes that one of the more common tests for long context (liberally cited by Google in its marketing materials), the “needle in the haystack,” only measures a model’s ability to retrieve particular pieces of information, such as names and numbers, from datasets — not to answer complex questions about that information.
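
For readers unfamiliar with that test, here is a minimal sketch of a needle-in-the-haystack style trial, again assuming a generic `generate(prompt) -> str` callback; the harnesses actually used by Google and the research community differ in their prompts, needles, and scoring.

```python
# Illustrative needle-in-the-haystack trial (assumed interface, not any vendor's harness).
from typing import Callable


def needle_in_haystack_trial(generate: Callable[[str], str], filler_text: str,
                             needle_fact: str, question: str,
                             expected_answer: str, position: float = 0.5) -> bool:
    """Insert a known fact at a relative position inside a long filler document,
    then check whether the model retrieves it when asked a direct question."""
    cut = int(len(filler_text) * position)
    haystack = filler_text[:cut] + "\n" + needle_fact + "\n" + filler_text[cut:]
    prompt = haystack + "\n\nAnswer from the document above: " + question
    return expected_answer in generate(prompt)
```

A trial might bury a sentence like “The secret code is 4128791.” halfway through a long report and then ask for the code; passing that retrieval check, as Saxon points out, says little about a model’s ability to answer complex questions over the same material.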

“All the scientists and most of the engineers who use these models essentially agree that our existing benchmark culture is broken,” Saxon said, “so it’s important for the public to understand that these giant reports containing numbers like ‘general intelligence across all benchmarks’ should be taken with a massive grain of salt.”
