Non-consensual deepfake porn of Taylor Swift went viral on X this week, with one post garnering more than 45 million views, 24,000 reposts and hundreds of thousands of likes before it was removed.
The pop star has one of the most dedicated, highly online and incomprehensibly huge fanbases in the world. Now, the Swifties are out for blood.
When mega-fandoms organize, they are capable of huge things, like when K-pop fans booked hundreds of tickets to a Donald Trump rally in an attempt to reduce attendance. As the 2024 US presidential election approaches, some experts have theorized about the power of Swifties as a voting bloc.
But today isn’t election day, and Swifties are focused on something more immediate: making the musician’s non-consensual deepfakes as difficult to find as possible. Now, when you search for terms like “taylor swift ai” or “taylor swift deepfake” on X, you’ll find thousands of posts from fans trying to bury the AI-generated content. On X, the phrase “PROTECT TAYLOR SWIFT” is trending with over 36,000 posts.
Sometimes, these fan-led campaigns can cross a line. While some fans are encouraging each other to dox the X users who posted the deepfakes, others are wary of combating harassment with more harassment, especially since the suspected perpetrator has a relatively common name and, in some cases, Swifties may have the wrong guy. With so many thousands of fans joining the cause, it’s inevitable that not all Swifties will present a united front, and some are more in touch with the “Reputation” era than others.
With the rise of accessible generative AI tools, this harassment tactic has become so widespread that last year the FBI and international law enforcement agencies issued a joint statement about the threat. According to research by cybersecurity firm Deeptrace, about 96% of deepfakes are pornographic, and they almost always feature women.
“Deepfake pornography is a phenomenon that exclusively targets and harms women,” the report says. This abuse has even surfaced in schools, where underage girls have been targeted by their classmates with explicit, non-consensual deepfakes. So for some Taylor Swift fans, this isn’t just a matter of protecting the star. They realize that these attacks can happen to anyone, not just celebrities, and that they must fight to set the precedent that this behavior will not be tolerated.
“She’s taking the hit for us right now, y’all,” a TikTok user named LeAnn said in one video urging users to defend Swift. “By protecting her, you will protect yourself and your daughters.”
According to 404 Media, the images originated in a Telegram chat dedicated to creating non-consensual, explicit images of women using generative artificial intelligence. The group directs its users to generate the images with Microsoft Designer; although this kind of content violates Microsoft’s policies, its AI is still able to produce it, and users have found simple workarounds to bypass its basic safety tools.
Microsoft and X did not respond to a request for comment before publication.
Lawmakers are making some progress toward criminalizing non-consensual deepfakes. Virginia has banned deepfake revenge porn, and Rep. Yvette Clarke (D-NY) recently reintroduced the DEEPFAKES Accountability Act, which she first proposed in 2019. While critics worry about the difficulty of legislating the dark corners of the web, some say the bill could at least set legal precedent for protecting victims of this abuse. Swift’s fans have previously called attention to the failings of Ticketmaster, the ticketing company owned by Live Nation. In a particularly memorable statement, FTC chair Lina Khan said last year that the disastrous ticketing experience for Swift’s Eras Tour “ended up turning more Gen Z-ers into antimonopolists than anything else I could have done.”
This abuse campaign is emblematic of a broader problem with the rapid rise of artificial intelligence: companies are building too quickly to properly assess the risks of the products they ship. Perhaps Taylor Swift’s fans will advance the fight for careful regulation of rapidly developing AI products. But if it takes a mass harassment campaign against one of the world’s biggest celebrities for botched AI models to face any sort of scrutiny, that’s a whole other problem.