YouTube today announced how it will approach handling AI-generated content on its platform, with a series of new policies around responsible disclosure as well as new tools to request the removal of deepfakes, among other things. The company says that while it already has policies prohibiting manipulated media, AI necessitated new policies because of its potential to mislead viewers who may not know a video has been “altered or synthetically created.”
One change rolling out is a new disclosure requirement for YouTube creators. They will now have to disclose when they have created altered or synthetic content that looks realistic, including videos made with artificial intelligence tools. For example, the disclosure would apply if a creator uploads a video that appears to depict a real event that never happened, or shows someone saying something they never said or doing something they never did.
It’s worth noting that this disclosure is limited to content that “looks realistic” and is not a blanket disclosure requirement for all AI-generated synthetic videos.
“We want viewers to have context when viewing realistic content, including the use of artificial intelligence tools or other synthetic changes to create it,” YouTube spokesman Jack Malon told TechCrunch. “This is especially important when the content discusses sensitive issues, such as elections or ongoing conflicts,” he noted.
AI-generated content is also an area where YouTube itself is a player. In September, the company announced Dream Screen, a new AI creation feature launching early next year that will let YouTube users generate an AI video or image background by typing in what they want to see. All content made with YouTube’s own AI products and features will be automatically labeled as altered or synthetic, we’ve been told.
The company also warns that creators who consistently fail to disclose their use of AI will be subject to “content removal, suspension from the YouTube Partner Program or other penalties.” YouTube says it will work with creators before the rules go live to make sure they understand the requirements. And it notes that some AI content, even if labeled, may be removed if it depicts “realistic violence” with the goal of shocking or disgusting viewers. The review seems timely, given that deepfakes have already been used to sow confusion about the Israel-Hamas war.
YouTube’s warning of punitive action, however, follows a recent softening of its strike policy. In late August, the company announced new ways for creators to resolve policy warnings before they accumulate into strikes that could get their channel taken down, as they can now complete a training course to have a warning removed. The changes could allow creators to carefully skirt YouTube’s rules: someone determined to post unsanctioned content now knows they can take that risk without losing their channel entirely.
If YouTube takes a similarly soft stance on AI, allowing creators to make “mistakes” and then come back to post more videos, the resulting spread of misinformation could become a problem. The company also hasn’t said how many times the AI disclosure rules would have to be violated before it takes punitive action.
Other changes include the ability for any YouTube user to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable person — also known as a deepfake — including their face or voice. The company clarifies, however, that not all flagged content will be removed, leaving room for parody and satire. It will also consider whether the person requesting the takedown can be uniquely identified, and whether the video involves a public official or other well-known person, in which case “there may be a higher bar,” YouTube says.
Alongside the deepfake removal request tool, the company is introducing a feature that will let music partners request the removal of AI-generated music that mimics an artist’s singing or rapping voice. YouTube has said it’s developing a system that will eventually compensate artists and rights holders for AI music, so this appears to be an interim step that simply allows content to be taken down in the meantime. YouTube will exercise judgment here as well, noting that content involving news reporting, analysis or criticism of synthetic vocals may be allowed to remain online. The removal system will initially be available only to labels and distributors representing artists participating in YouTube’s AI experiments.
AI is used in other areas of YouTube’s business as well, including augmenting the work of its 20,000 content reviewers worldwide and identifying emerging forms of abuse and threats, the announcement noted. The company says it understands that bad actors will try to circumvent its rules, and that it will evolve its protections and policies based on user feedback.
“We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube by building artificial intelligence. We’re extremely excited about the potential of this technology and know what’s next will resonate across the creative industries for years to come,” said the YouTube blog post, co-authored by VPs of Product Management Jennifer Flannery O’Connor and Emily Moxley. “We are taking the time to balance these benefits with ensuring the continued safety of our community at this critical time—and will work hand-in-hand with creators, artists and others across the creative industries to build a future that benefits everyone.”
