Meta announced Thursday that it is starting to roll out more advanced AI systems to manage content enforcement as it plans to cut back on third-party vendors. Content enforcement tasks include detecting and removing content related to terrorism, child exploitation, drugs, fraud and scams.
The company says it will deploy these more advanced AI systems across all of its apps once they consistently outperform current content enforcement methods. At the same time, it will reduce its reliance on third-party vendors for content enforcement.
“While we’ll still have people reviewing content, these systems will be able to take on work better suited to technology, such as repetitive graphic content reviews or areas where adversaries are constantly changing their tactics, such as with illegal drug sales or fraud,” Meta explained in a blog post.
Meta believes these AI systems can detect more breaches with greater accuracy, better prevent fraud, respond faster to real-world events and reduce over-enforcement.
The company says early tests of its AI systems have been promising: the systems detected twice as much adult sexual content as its review teams while reducing the error rate by more than 60 percent. It also says the systems can detect and prevent more impersonation accounts involving celebrities and other high-profile people, as well as help stop account takeovers by detecting signals such as connections from new locations, password changes or edits to a profile.
In addition, Meta says the systems can detect and mitigate about 5,000 fraud attempts a day, in which fraudsters try to trick people into giving out their login details.
“Experts will design, train, oversee and evaluate our AI systems, measuring performance and making the most complex decisions with high impact,” Meta wrote in the blog post. “For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appealing account deactivation or reporting to law enforcement.”
The move comes as Meta has relaxed its content moderation rules over the past year or so since President Donald Trump took office for a second term. Last year, the company ended its third-party fact-checking program in favor of an X-style Community Notes model. It also removed restrictions on “topics that are part of mainstream discourse” and said users would be encouraged to take a “personalized” approach to political content.
It also comes as Meta and other big tech companies face several lawsuits seeking to hold social media giants accountable for harming children and young users.
Meta also announced Thursday that it is launching a Meta AI support assistant that will give users access to 24/7 support. The assistant is available globally in the Facebook and Instagram apps for iOS and Android, as well as in the Facebook and Instagram Help Center on desktop.
