On Thursday, weeks after launching its most powerful AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and sometimes unflattering, information that companies don’t always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the “experimental” stage. The company also doesn’t include findings from all of its “dangerous capability” evaluations in these write-ups; it reserves those for a separate audit.
Several experts TechCrunch spoke with were nonetheless frustrated by the sparsity of the Gemini 2.5 Pro report, which they noted doesn’t mention Google’s Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause “severe harm.”
“This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public,” Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told TechCrunch.
Thomas Woodside, co-founder of the Secure AI Project, said that while he’s glad Google published a report for Gemini 2.5 Pro, he’s not convinced of the company’s commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that year.
Not inspiring much confidence, Google has yet to release a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told TechCrunch that a report for Flash is coming “soon.”
“I hope this is a promise from Google to start publishing more frequent updates,” Woodside told TechCrunch. “Those updates should include the results of evaluations for models that haven’t been publicly deployed yet, since those models could also pose serious risks.”
Google may have been one of the first AI labs to propose standardized reports for models, but it’s not the only one that’s been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google’s head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all “significant” public AI models. The company followed that promise up with similar commitments to other countries, pledging to “provide public transparency” around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a “race to the bottom” on AI safety.
“Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months down to days, this meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency,” he told TechCrunch.
Google has said in statements that, while not detailed in its technical reports, it conducts safety testing and “adversarial red teaming” for models ahead of release.
