Google I/O 2025, the biggest developer conference on Google’s calendar, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event.
I/O showcases products from across Google’s entire portfolio. We have plenty of news on Android, Chrome, Google Search, YouTube and, of course, Google’s AI chatbot, Gemini.
Google hosted a separate event dedicated to Android: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection Program, security tools to guard against scams and theft, and a new design language called Material 3 Expressive.
Here’s everything announced at Google I/O 2025.
Gemini Ultra
Gemini Ultra (U.S. only for now) delivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet.
AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.
Deep Think in Gemini 2.5 Pro
Deep Think is an “enhanced” reasoning mode for Google’s Gemini 2.5 Pro model. It allows the model to consider multiple answers to a question before responding, boosting its performance on certain benchmarks.
Google didn’t go into detail about how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.
Deep Think is available to “trusted testers” via the Gemini API. Google said it’s taking extra time to conduct safety evaluations before rolling it out widely.
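For context, the base Gemini 2.5 Pro model is already reachable through the Gemini API, so here is a minimal Python sketch using the google-genai SDK. The model ID shown is the standard Gemini 2.5 Pro identifier; whether Deep Think will be exposed through the same call, and under what ID, is an assumption, since Google hasn’t published details beyond the trusted-tester program.

    # Minimal sketch: querying Gemini 2.5 Pro through the Gemini API using
    # the google-genai Python SDK (pip install google-genai).
    # The standard model ID is used purely for illustration; any Deep
    # Think-specific model ID has not been made public.
    import os
    from google import genai

    # The client reads the API key from an environment variable here.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    response = client.models.generate_content(
        model="gemini-2.5-pro",  # a Deep Think variant would presumably use a different ID
        contents="Outline three candidate approaches to this proof, then pick the strongest.",
    )
    print(response.text)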
Veo 3 AI video generator
Google claims that Veo 3 can generate sound effects, background noise, and even dialogue to accompany the videos it creates. Veo 3 also improves on its predecessor, Veo 2, in terms of the quality of footage it can produce, Google says.
Veo 3 is available starting Tuesday in Google’s Gemini chatbot app for AI Ultra subscribers, where it can be prompted with text or an image.
Imagen 4 AI image generator
According to Google, Imagen 4 is fast, faster than Imagen 3, and it will soon get faster still: in the near future, Google plans to release an Imagen 4 variant that is up to 10x quicker than Imagen 3.
Imagen 4 is capable of rendering “fine details” such as fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and at up to 2K resolution.
Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered, filmmaking-oriented video tool.
Gemini app updates
Google announced that the Gemini apps have more than 400 million monthly active users.
Gemini Live’s camera and screen-sharing capabilities are rolling out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini while streaming video from their smartphone’s camera or screen to the AI model.
Google says Gemini Live will also begin integrating more deeply with its other apps in the coming weeks: soon it will be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.
Google also says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.
Stitch
Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a short description or even an image, and it provides HTML and CSS markup for the designs it generates.
Stitch is a bit more limited in what it can do compared with some other vibe coding products, but there’s a fair amount of customization on offer.
Google also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
Project Mariner
Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and it is now rolling the agent out to users.
For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website; they simply chat with Google’s AI agent, which visits the sites and takes actions on their behalf.
Project Astra
Project Astra, Google’s low-latency, multimodal AI experience, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers.
Project Astra was born out of Google DeepMind as a way to showcase near-real-time, multimodal AI capabilities. The company says it’s now building Project Astra glasses with partners such as Samsung and Warby Parker, but it doesn’t have a set launch date.
AI Mode
Google is rolling out AI Mode, its experimental Google Search feature that lets people ask complex, multi-part questions through an AI interface, to users in the U.S. this week.
AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, coming later this summer, will let you ask questions based on what your phone’s camera is seeing in real time.
Gmail is the first app to be supported with personalized context.
Beam 3D teleconferencing
Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and a custom light-field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed at the user, into a 3D rendering.
Google claims Beam offers “near-perfect” head tracking, down to the millimeter, and 60fps video streaming. When used with Google Meet, Beam provides a real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions.
And speaking of Google Meet, Google announced that Meet is getting real-time speech translation.
More AI updates
Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done.
Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday, and it can handle audio, text, images, and videos, according to Google.
The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content.
Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
Wear OS 6
Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches get dynamic theming that syncs app colors with watch faces.
The core promise of the new design platform is to let developers build better customization into apps, along with seamless transitions. The company is releasing design guidelines for developers together with Figma design files.
Google Play
Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give users a sneak peek into app content, and a new checkout experience to ease the sale of add-ons.
Topic “browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to those shows and movies. In addition, developers are getting dedicated pages for testing and releases, plus tools to monitor and improve their app rollouts. Developers using Google Play will also be able to halt live app releases if a critical problem pops up.
Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside main subscriptions, all under a single payment.
Android Studio
Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model, and an agent mode that will be able to handle more intricate development processes.
Android Studio is also getting an improved “crash insights” feature in its App Quality Insights panel. This improvement, powered by Gemini, will analyze an app’s source code to identify possible causes of crashes and suggest fixes.
