One of the selling points of Google’s flagship AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can supposedly process and analyze. In press briefings and demos, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their “long context,” such as summarizing multiple documents hundreds of pages long or searching across scenes in movie footage.
But new research suggests that the models aren’t, in fact, very good at those things.
Two separate studies explored how well Google’s Gemini models, among others, make sense of enormous amounts of data (think works the length of War and Peace). Both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40% to 50% of the time.
“While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don’t actually ‘understand’ the content,” Marzena Karpinska, a postdoctoral researcher at UMass Amherst and a co-author on one of the studies, told TechCrunch.
Gemini’s context window falls short
A model’s context, or context window, refers to the input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question, “Who won the 2020 U.S. presidential election?”, can serve as context, as can a movie script, a show, or an audio clip. And as context windows grow, so does the size of the documents that fit into them.
The latest versions of Gemini can take in upward of 2 million tokens as context. (“Tokens” are subdivided bits of raw data, like the syllables “fan,” “tas,” and “tic” in the word “fantastic.”) That’s equivalent to roughly 1.4 million words, two hours of video, or 22 hours of audio, the largest context window of any model on the market.
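For a rough sense of that tokens-to-words arithmetic, here is a minimal Python sketch. It uses the open-source tiktoken library as a stand-in tokenizer; Gemini’s own tokenizer is different and not public, so the counts below are illustrative assumptions, not Gemini’s actual numbers.

```python
# Minimal sketch of the tokens-to-words math, using tiktoken as a
# stand-in tokenizer (an assumption: Gemini's tokenizer is not public).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Gemini can reportedly take in two hours of video as context."
tokens = enc.encode(text)
print(f"{len(text.split())} words -> {len(tokens)} tokens")

# Extrapolating the ~0.7 words-per-token ratio implied above:
# 2,000,000 tokens * 0.7 words/token = ~1.4 million words.
print(f"~{int(2_000_000 * 0.7):,} words in a 2M-token window")
```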
In an update earlier this year, Google showed several pre-recorded demos meant to illustrate the potential of Gemini’s long-context capabilities. In one, Gemini 1.5 Pro was asked to search the transcript of the Apollo 11 moon landing telecast, which runs around 402 pages, for quotes containing jokes, and then to find a scene in the telecast that looked like a pencil sketch.
Google DeepMind research vice president Oriol Vinyals, who led the briefing, described the model as “magical.”
“[1.5 Pro] performs these sorts of reasoning tasks across every page, every word,” he said.
That may have been an exaggeration.
In one of the aforementioned studies evaluating these capabilities, Karpinska, together with researchers from the Allen Institute for AI and Princeton, asked the models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so the models couldn’t “cheat” by relying on foreknowledge, and they peppered the statements with references to specific details and plot points that would be impossible to grasp without reading the books in their entirety.
Given a statement like “Using her skills as an Apoth, Nusis is able to reverse the type of portal opened by the reagent key found in Rona’s wooden chest,” Gemini 1.5 Pro and 1.5 Flash, having ingested the relevant book, had to say whether the statement was true or false and explain their reasoning.
Tested on one book around 260,000 words long (~520 pages), the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. That means a coin flip is significantly better at answering questions about the book than Google’s latest machine learning models. Averaging across all the benchmark results, neither model managed to achieve better-than-random-chance accuracy at question answering.
“We noticed that the models have more difficulty verifying claims that require examining larger parts of the book, or even the entire book, compared to claims that can be resolved by retrieving sentence-level evidence,” Karpinska said. “Qualitatively, we also observed that models have difficulty verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text.”
The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to “scan” videos — that is, search for and answer questions about the content in them.
The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in them (e.g., “What cartoon character is on this cake?”). To evaluate the models, they picked one of the images at random and inserted “distractor” images before and after it to create slideshow-like footage.
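As a rough illustration of that setup, here is a hypothetical Python sketch of how such a distractor slideshow could be assembled; the function and variable names are ours, not from the study’s code.

```python
# Hypothetical sketch of the "distractor slideshow" construction the
# study describes: hide one question-bearing image among unrelated
# frames, then quiz the model about it. Names here are illustrative.
import random

def build_slideshow(needle_image, distractors, total_frames=25):
    """Return a frame list with the needle at a random position."""
    frames = random.sample(distractors, total_frames - 1)
    position = random.randrange(total_frames)
    frames.insert(position, needle_image)
    return frames, position

# The model would then receive all 25 frames plus a question about
# the needle, e.g. "What cartoon character is on this cake?"
```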
Flash didn’t perform especially well. In a test that had the model transcribe six handwritten digits from a slideshow of 25 images, Flash got about 50% of the transcriptions right. Accuracy dropped to around 30% with eight digits.
“On real question-answering tasks over images, it appears to be particularly hard for all the models we tested,” Michael Saxon, a PhD student at UC Santa Barbara and one of the study’s co-authors, told TechCrunch. “That small amount of reasoning, recognizing that a number is in a frame and reading it, might be what’s breaking the model.”
Google is overpromising with Gemini
Neither of the studies has been peer-reviewed, nor do they probe the releases of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts. (Both tested the 1-million-token context releases.) And Flash isn’t meant to be as capable as Pro performance-wise; Google advertises it as a low-cost alternative.
Still, both add fuel to the fire that Google has been overpromising, and underdelivering, with Gemini from the start. None of the models the researchers tested, including OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, performed well. But Google is the only model provider that has given the context window top billing in its advertisements.
“There’s nothing wrong with the simple claim, ‘Our model can take X number of tokens,’ based on objective technical details,” Saxon said. “But the question is, what useful thing can you do with it?”
Generative AI in general is coming under increased scrutiny as businesses (and investors) grow frustrated with the technology’s limitations.
In a pair of recent surveys from the Boston Consulting Group, about half of the respondents, all C-suite executives, said that they don’t expect generative AI to bring about substantial productivity gains and that they’re worried about the potential for mistakes and data compromises arising from generative AI-powered tools. PitchBook recently reported that, for two consecutive quarters, early-stage generative AI dealmaking has declined, plummeting 76% from its Q3 2023 peak.
Faced with meeting-summarizing chatbots that conjure up imaginary details about people and AI search platforms that basically amount to plagiarism generators, customers are on the hunt for promising differentiators. Google, which has raced, at times clumsily, to catch up with its generative AI rivals, was desperate to make Gemini’s context window one of those differentiators.
But the bet was premature, it seems.
“We haven’t converged on a way to really show that ‘reasoning’ or ‘understanding’ over long documents is taking place, and basically every group releasing these models is cobbling together their own ad hoc evals to make these claims,” Karpinska said. “Without knowing how long-context processing is implemented (and companies don’t share these details), it’s hard to say how realistic these claims are.”
Google did not respond to a request for comment.
Both Saxon and Karpinska believe the antidotes to hyped-up claims around generative AI are better benchmarks and, in the same vein, a greater emphasis on third-party critique. Saxon notes that one of the more common tests for long context, the “needle in the haystack” (cited liberally by Google in its marketing materials), only measures a model’s ability to retrieve particular pieces of information, such as names and numbers, from datasets, not to answer complex questions about that information.
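To make that distinction concrete, here is a minimal sketch of a needle-in-the-haystack probe. Note that query_model is a placeholder for whatever model API is under test, and the filler and needle strings are invented for illustration.

```python
# Minimal needle-in-the-haystack probe: it tests retrieval only, not
# reasoning over the document. `query_model` is a placeholder for the
# model API under test; the needle and filler text are invented.
import random

FILLER = "The quick brown fox jumps over the lazy dog."
NEEDLE = "The secret passcode is 7419."

def build_haystack(n_sentences=5_000):
    """Bury the needle sentence at a random spot in filler text."""
    sentences = [FILLER] * n_sentences
    sentences.insert(random.randrange(n_sentences + 1), NEEDLE)
    return " ".join(sentences)

def run_trial(query_model):
    prompt = build_haystack() + "\n\nWhat is the secret passcode?"
    return "7419" in query_model(prompt)  # retrieval success only
```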
“All the scientists and most of the engineers using these models essentially agree that our existing benchmark culture is broken,” Saxon said, “so it’s important for the public to understand to take these giant reports containing numbers like ‘general intelligence across benchmarks’ with a massive grain of salt.”