Malicious actors are abusing AI-powered music tools to create homophobic, racist and propaganda songs — and posting guides instructing others how to do it.
According to ActiveFence, a service for managing trust and safety operations on online platforms, there has been a surge since March of chatter in hate speech communities about ways to misuse AI music-creation tools to write offensive songs targeting minority groups. The AI-generated songs being shared in these forums and message boards aim to incite hatred toward ethnic, racial and religious groups, ActiveFence researchers say in a report, while celebrating acts of martyrdom, self-harm and terrorism.
Hateful and harmful songs are hardly a new phenomenon. But the fear is that, with the advent of easy-to-use free music-generating tools, they will be made at scale by people who previously lacked the means or expertise — just as image, voice, video and text generators have accelerated the spread of misinformation, disinformation and hate speech.
“These are trends that are intensifying as more users learn how to create these songs and share them with others,” said Noam Schwartz, co-founder and CEO of ActiveFence, in an interview with TechCrunch. “Threat actors are quickly identifying specific vulnerabilities to abuse these platforms in different ways and create malicious content.”
Making “hate” songs
Generative AI music tools like Udio and Suno allow users to add custom lyrics to generated songs. Safeguards on the platforms filter out common slurs and pejoratives, but users have found workarounds, according to ActiveFence.
In one example cited in the report, users on white supremacist forums shared phonetic spellings of minorities and offensive terms, such as “jooz” instead of “Jews” and “say tan” instead of “Satan,” which they used to bypass content filters. Some users also suggested altering spacing and spelling when referring to acts of violence.
TechCrunch tested several of these workarounds on Udio and Suno, two of the more popular tools for creating and sharing AI-generated music. Suno let them all through, while Udio blocked some — but not all — of the offensive lyrics.
A spokesperson for Udio told TechCrunch that the company prohibits the use of its platform for hate speech. Suno did not respond to our request for comment.
In the communities it investigated, ActiveFence found links to AI-generated songs parroting conspiracy theories about Jews and advocating their mass murder; songs containing slogans associated with the terrorist groups ISIS and Al-Qaeda; and songs glorifying sexual violence against women.
The impact of a song
Schwartz argues that songs — unlike, say, text — carry an emotional weight that makes them a potent force for hate groups and political warfare. He points to Rock Against Communism, the series of white-power rock concerts in the UK in the late ’70s and early ’80s that spawned entire subgenres of antisemitic and racist “hatecore” music.
“AI makes harmful content more appealing — think of someone preaching a harmful narrative to a certain population, and then imagine someone creating a rhyming song that makes it easy for everyone to sing and remember,” he said. “Such songs reinforce group solidarity, indoctrinate members of fringe groups and are also used to shock and offend unaffiliated internet users.”
Schwartz is calling on music generation platforms to implement prevention tools and conduct more extensive safety evaluations. “Red teaming can surface some of these vulnerabilities, and can be done by simulating the behavior of threat actors,” Schwartz said. “Better moderation of inputs and outputs would also be useful here, as it would allow platforms to block content before it is shared with the user.”
However, fixes could prove fleeting as users discover new methods to defeat moderation. Some of the AI-generated terrorist propaganda songs ActiveFence identified, for example, were created using Arabic-language euphemisms and transliterations — euphemisms the music tools didn’t catch, possibly because their filters aren’t robust for Arabic.
AI-generated hate music is poised to go viral if it follows in the footsteps of other AI-generated media. Wired documented earlier this year how an AI-manipulated clip of Adolf Hitler garnered more than 15 million views on X after it was shared by a far-right conspiracy influencer.
A UN advisory body, among other experts, has expressed concern that racist, antisemitic, Islamophobic and xenophobic content could be supercharged by generative AI.
“Generative AI services allow users who lack resources or creative and technical skills to create engaging content and spread ideas that can compete for attention in the global marketplace of ideas,” Schwartz said. “And threat actors, having discovered the creative potential these new services offer, are working to bypass moderation and avoid detection — and they’ve been successful.”