Meta has announced changes to its rules on AI-generated content and manipulated media following criticism from its Oversight Board. Starting next month, the company said, it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes. Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.
The move could lead to the social networking giant labeling more pieces of content that have the potential to be misleading — important in a year of many elections taking place around the world. However, for deepfakes, Meta will only apply labels where the content in question carries “industry standard AI image indicators” or where the uploader has disclosed that the content is AI-generated.
AI-generated content that falls outside those bounds will likely go unlabeled.
The policy change is also likely to mean more AI-generated and manipulated content remaining on Meta’s platforms, as the company shifts to an approach focused on “providing transparency and additional context” as a “better way to address this content” than removing manipulated media, given the associated risks to free speech.
So for AI-generated or otherwise manipulated media on Meta platforms like Facebook and Instagram, the playbook seems to be: more tags, fewer takedowns.
Meta said it would stop removing content solely on the basis of its current manipulated video policy in July, adding in a blog post published Friday: “This timeline gives people time to understand the self-disclosure process before we stop removing the smaller subset of manipulated media.”
The change in approach may be intended to respond to rising legal requirements on Meta around content moderation and systemic risk, such as the European Union’s Digital Services Act. Since last August the EU law has applied a set of rules to Meta’s two main social networks that require it to walk a fine line between removing illegal content, mitigating systemic risks and protecting free speech. The bloc is also applying extra pressure on platforms ahead of elections to the European Parliament in June, including urging tech giants to watermark deepfakes where technically feasible.
The upcoming US presidential election in November is also likely to preoccupy Meta.
Criticism from the Oversight Board
Meta’s Oversight Board, which the tech giant funds but permits to operate at arm’s length, reviews a small percentage of its content moderation decisions and can also make policy recommendations. Meta is not bound to accept the Board’s suggestions, but in this instance it has agreed to amend its approach.
In a blog post published Friday, Monika Bickert, Meta’s vice president of content policy, said the company is amending its policies on AI-generated content and manipulated media based on the Board’s feedback. “We agree with the Oversight Board’s argument that our existing approach is too narrow, covering only videos that are created or altered by artificial intelligence to make a person appear to say something they did not say,” she wrote.
In February, the Oversight Board urged Meta to rethink its approach to AI-generated content after taking up the case of a doctored video of President Biden, which had been edited to imply a sexual motive to a platonic kiss he gave his granddaughter.
While the Board agreed with Meta’s decision to leave the specific content up, it attacked Meta’s manipulated media policy as “incoherent” — noting, for example, that it only applies to AI-generated video, letting other fake content (such as more crudely doctored video or audio) off the hook.
Meta seems to have taken the critical feedback on board.
“Over the past four years, and especially in the last year, people have been developing other kinds of realistic AI-generated content, such as audio and photos, and this technology is evolving rapidly,” Bickert wrote. “As the Board noted, it is equally important to address manipulation that shows a person doing something they did not do.
“The Board also argued that we risk unnecessarily restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommended a ‘less restrictive’ approach to manipulated media, such as labels with context.”
Earlier this year, Meta announced it was working with others in the industry to develop common technical standards for identifying AI content, including video and audio. Its expanded labeling of synthetic media builds on that effort.
“Our ‘Made with AI’ labels on AI-generated video, audio and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content,” wrote Bickert, noting that the company already applies “Imagined with AI” labels to photorealistic images created using its own Meta AI feature.
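For context on what such “industry-shared signals” can look like: the IPTC photo-metadata standard defines a DigitalSourceType value of “trainedAlgorithmicMedia” for AI-generated imagery, and C2PA provenance manifests are another such signal. The Python sketch below is purely illustrative; the file name and the naive byte scan are assumptions for demonstration, not Meta’s actual detection pipeline.

```python
# Illustrative sketch only: scan a file's embedded metadata for markers
# that provenance standards use to declare an image as AI-generated.
# Production systems parse XMP/IPTC/C2PA structures properly rather than
# byte-scanning, and Meta has not published its detection pipeline;
# this only shows the general idea.

from pathlib import Path

# Byte patterns whose presence suggests the file declares AI provenance.
AI_PROVENANCE_MARKERS = (
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # string found in C2PA provenance manifests
)

def declares_ai_generated(path: str) -> bool:
    """Return True if the file carries a known AI-provenance marker."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    # "example.jpg" is a hypothetical file name used for illustration.
    print(declares_ai_generated("example.jpg"))
```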
The expanded policy will cover “a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” per Bickert.
“If we determine that digitally created or altered images, video or audio create a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context,” she wrote. “This overall approach gives people more information about the content so they can better assess it, and so they will have context if they see the same content elsewhere.”
Meta said it won’t remove manipulated content, whether AI-generated or otherwise, unless it violates other policies (such as voter interference, bullying and harassment, violence and incitement, or other Community Standards issues). Instead, as noted above, it may add “informational labels and context” in certain scenarios of high public interest.
Meta’s blog post highlights a network of nearly 100 independent fact-checkers with which it says it is working to help identify risks related to manipulated content.
These external entities will continue to review false and misleading AI-generated content, per Meta. When they rate content as “False or Altered,” Meta said it will respond with algorithmic changes that reduce the content’s reach, meaning it will appear lower in Feeds so fewer people see it, along with an overlay label carrying additional information for those who do come across it.
These third-party fact-checkers look set to face a growing workload as synthetic content proliferates, driven by the boom in generative AI tools, especially since more of it looks set to remain on Meta’s platforms as a result of this policy shift.