Meta isn’t the only company grappling with the rise of AI-generated content and how it’s affecting its platform. YouTube also quietly rolled out a policy change in June that allows users to request the removal of AI-generated or other synthetic content that simulates their face or voice. The change makes such requests part of YouTube’s privacy request process and extends the responsible AI agenda the company first introduced in November.
Instead of asking for content to be taken down because it’s misleading, like a deepfake, YouTube wants affected parties to request removal directly as a privacy violation. According to YouTube’s recently updated Help documentation, the process requires first-party claims outside of certain exceptions, such as when the affected individual is a minor, doesn’t have access to a computer, is deceased, or in other similar cases.
However, simply submitting a takedown request doesn’t guarantee the content will be removed. YouTube cautions that it will consider a variety of factors when evaluating the complaint.
For example, it may consider whether the content is disclosed as synthetic or AI-generated, whether it uniquely identifies a person, and whether it could be considered parody, satire, or otherwise valuable and in the public interest. The company also notes it may consider whether the AI content features a public figure or other well-known individual and depicts them engaging in “sensitive behavior,” such as criminal activity, violence, or endorsing a product or political candidate. The latter is especially troubling in an election year, when AI-generated endorsements could potentially sway votes.
YouTube says it will also give the uploader 48 hours to act on the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube will initiate a review. The company warns uploaders that removal means taking the video off the site entirely and, if applicable, removing the person’s name and personal information from the video’s title, description, and tags. Uploaders can also blur the faces of people in their videos, but they can’t simply make the video private to comply with the takedown request, since the video could be set back to public at any time.
The company did not widely publicize the policy change, however. In March, it introduced a tool in Creator Studio that lets creators disclose when realistic content was made with altered or synthetic media, including generative AI. More recently, it began testing a feature that allows users to add crowdsourced notes offering additional context on videos, such as whether one is intended as a parody or is misleading in some way.
YouTube is not opposed to the use of AI, having already experimented with AI itself, including a comments summarizer and a conversational tool for asking questions about a video or getting recommendations. However, the company has warned that simply labeling AI content as such will not necessarily protect it from removal, as it still has to comply with YouTube’s Community Guidelines.
In the case of privacy complaints about AI material, YouTube will not penalize the original content creator.
“For creators, if you receive a privacy complaint notice, keep in mind that privacy violations are separate from Community Guidelines strikes, and receiving a privacy complaint will not automatically result in a strike,” the company explained last month in a post on the YouTube Community site, where it informs creators directly about new policies and features.
In other words, YouTube’s Privacy Guidelines differ from its Community Guidelines, and some content may be removed from YouTube as a result of a privacy request even if it doesn’t violate the Community Guidelines. While the company won’t impose penalties, such as an upload restriction, when a creator’s video is removed over a privacy complaint, YouTube tells us it may take action against accounts with repeat violations.
Updated, 1/7/24, 4:17 p.m. ET with more information about what actions YouTube can take for privacy violations.