YouTube is updating its harassment and cyberbullying policies to crack down on content that “realistically simulates” deceased minors or victims of deadly or violent events describing their deaths. The Google-owned platform says it will begin striking such content on January 16.
The policy change comes as some true crime creators have been using artificial intelligence to recreate the likeness of deceased or missing children. In these disturbing cases, creators are using AI to give child victims of high-profile cases a childlike “voice” to describe their deaths.
In recent months, content creators have used AI to narrate numerous high-profile cases, including the kidnapping and death of James Bulger, a British two-year-old, as reported by The Washington Post. There are also similar AI narrations about Madeleine McCann, a British three-year-old who disappeared from a resort, and Gabriel Fernández, an eight-year-old boy who was tortured and murdered by his mother and her boyfriend in California.
YouTube will remove content that violates the new policies, and users who receive a strike will be unable to upload videos, live streams, or stories for one week. After three strikes, the user’s channel will be permanently removed from YouTube.
The new changes come nearly two months after YouTube introduced new policies on responsible disclosure of AI content, along with new tools to request the removal of deepfakes. One of those changes requires users to disclose when they have created altered or synthetic content that appears realistic. The company warned that users who fail to properly disclose their use of AI will be subject to “content removal, suspension from the YouTube Partner Program or other penalties.”
Additionally, YouTube noted at the time that some AI-generated content may be removed if it’s used to show “realistic violence,” even if it’s labeled.
In September 2023, TikTok launched a tool that allows creators to flag their AI-generated content after the social networking app updated its guidelines to require creators to disclose when they post synthetic or manipulated media that displays realistic scenes. TikTok’s policy allows it to remove realistic AI images that are not disclosed.