Google is bringing Gemini, its generative AI, to all cars that support Android Auto in the coming months, the company announced ahead of its 2025 I/O developer conference.
The company says in a blog post that adding Gemini functionality to Android Auto, and later this year to cars running Google’s built-in operating system, will make driving “more productive — and fun.”
“This will really be, we think, one of the biggest transformations in the in-vehicle experience that we’ve seen in a long time,” said Patrick Brady, vice president of Android for Cars, during a virtual briefing with members of the media ahead of the conference.
Gemini will show up in the Android Auto experience in two main ways.
First, Gemini will act as a much more capable smart voice assistant. Drivers (or passengers; Brady said there is no voice matching, so anyone near the phone running Android Auto can use it) will be able to ask Gemini to send texts, play music, and do all the things Google Assistant could already do. The difference is that users won’t have to be as robotic with their commands, thanks to Gemini’s natural-language capabilities.
Gemini can also “remember” things, such as whether a contact prefers to receive text messages in a particular language, and handle that translation for the user. And Google claims Gemini will be able to pull off one of the most common in-car tech demos: finding good restaurants along a planned route. Brady said Gemini will of course be able to draw on Google listings and reviews to respond to more specific requests, such as “taco places with vegan options.”
The other main way Gemini will surface is through what Google calls “Gemini Live,” a mode in which the AI is essentially always listening and ready to engage in full conversations about … just about anything. Brady said those conversations could range from travel ideas for spring break, to brainstorming recipes a 10-year-old would want, to “Roman history.”
If all of that sounds a little distracting, Brady said Google believes it won’t be. He claimed the natural-language capabilities will make it easier to accomplish specific tasks in Android Auto with less fussing, and that Gemini will therefore “reduce cognitive load.”
That is a bold claim to make at a time when people are clamoring for car companies to move away from touchscreens and bring back physical buttons and knobs, a request many of those companies are starting to oblige.
There is still a lot to be worked out. For now, Gemini will leverage Google’s cloud processing to work on both Android Auto and in cars with Google built-in. But Brady said Google is working with automakers “to build in more compute so that [Gemini] can run at the edge,” which would help not only with performance but also with reliability, a tricky factor for a moving vehicle that may be hopping between cell towers every few minutes.
Modern cars also generate a lot of data from their onboard sensors and, in some models, even interior and exterior cameras. Brady said Google doesn’t yet have anything to share about whether Gemini will be able to leverage that multimodal data, but that the company is “talking about it.”
“We definitely believe cars are going to have more and more cameras. There are some really interesting use cases in the future here,” he said.
Gemini on Android Auto and Google built-in will launch in every country that already has access to the company’s generative AI model, and it will support more than 40 languages.
