Creative music AI tool ProducerAI will become part of Google Labs, Google announced on Tuesday.
Backed by The Chainsmokers, the ProducerAI platform allows users to write natural language requests – something like “make a lofi beat” – to create music. It uses Google DeepMind’s Lyria 3 music production model, which can convert text and even image inputs into audio outputs.
Google announced last week that Lyria 3’s capabilities would be introduced in its flagship Gemini app, but ProducerAI lets users interact with the AI model more like a “partner,” in the words of Elias Roman, senior director of Product Management at Google Labs.
“ProducerAI has allowed me to create in new ways,” Roman wrote in the blog post. “I’ve experimented with combinations of new genres, expressed how I feel with personalized birthday songs for loved ones, and made custom workout soundtracks for me and my friends.”
Google also shared that three-time Grammy-winning rapper Wyclef Jean used the Lyria 3 model and Google’s Music AI Sandbox in his recent song “Return from Abu Dhabi.”
“It’s not just a machine where you click a button a hundred times and then you’re done. It’s a careful kind of curation where you go through and say, ‘Oh, I think this is something we can use,'” Jeff Chang, director of Product Management at Google DeepMind, said in a video released by the company.
Jean recalled wanting to know what a flute would sound like on a track he had already recorded, and he was able to use Google’s tools to quickly add a flute sound to the mix.
“What I want everyone to understand […] you are at the time when man should be the most creative,” Jean said in the video. “There is one thing you have over artificial intelligence: a soul. And there’s one thing AI has over you: infinite information.”
AI in the music industry
Some musicians have strongly objected to the use of AI tools in the music-making process, as it is almost a foregone conclusion that an AI production tool was trained on copyrighted data from artists without their consent. Hundreds of musicians, including stars such as Billie Eilish, Katy Perry and Jon Bon Jovi, signed an open letter in 2024 calling on tech companies not to undermine human creativity with AI music production tools.
A group of music publishers recently sued AI company Anthropic for $3 billion, claiming the company illegally downloaded more than 20,000 copyrighted songs, including sheet music, song lyrics and musical compositions. (Anthropic had already been ordered by a court to offer a $1.5 billion settlement to authors whose books were pirated for AI training.)
Other artists, however, have embraced the technology’s potential as a way to improve sound quality rather than as a creative aid.
Paul McCartney used AI-powered noise reduction systems — the kind of technology that lets Zoom or FaceTime block out unwanted background noises in your video calls — to clean up a decades-old, low-quality John Lennon demo. The resulting “new” Beatles track, “Now and Then,” won a Grammy in 2025.
Meanwhile, AI music production tools like Suno have created synthesized music that sounds real enough to top the charts on Spotify and Billboard. Telisha Jones, a 31-year-old in Mississippi, used Suno to turn her poetry into the viral R&B song “How Was I Supposed to Know” and signed a record deal with Hallwood Media reportedly worth $3 million.
The law remains unclear on the legality of using copyrighted works as training data — a federal judge, William Alsup, ruled last year that training on copyrighted data is legal, but piracy is not.
