Meta’s internal child safety documents have been unsealed as part of a lawsuit filed by the New Mexico Department of Justice against both Meta and its CEO, Mark Zuckerberg. The documents reveal that Meta not only deliberately marketed its messaging platforms to children, but also knew about the vast amount of inappropriate and sexual content being shared between adults and minors.
The documents, unsealed Wednesday as part of an amended complaint, highlight multiple instances of Meta employees raising concerns internally about the exploitation of children and teens on the company’s private messaging platforms. Meta recognized the risks that Messenger and Instagram DMs posed to underage users, but failed to prioritize safeguards or outright blocked child safety features because they weren’t profitable.
In a statement to TechCrunch, New Mexico Attorney General Raúl Torrez said Meta and Zuckerberg enabled child predators to sexually exploit children. He recently raised concerns over Meta enabling end-to-end encryption on Messenger, which began rolling out last month. In a separate filing, Torrez pointed out that Meta failed to address child exploitation on its platforms, and that encryption without proper safeguards would put minors at further risk.
“For years, Meta employees have been trying to raise the alarm about how decisions made by Meta executives subject children to dangerous solicitations and sexual exploitation,” Torrez continued. “Meta executives, including Mr. Zuckerberg, consistently made decisions that put growth over the safety of children. While the company continues to downplay the illegal and harmful activity that children are exposed to on its platforms, Meta’s internal data and presentations show that the problem is serious and pervasive.”
Originally filed in December, the lawsuit alleges that Meta platforms such as Instagram and Facebook have become “a marketplace for predators in search of children to prey on,” and that Meta failed to remove many instances of child sexual abuse material (CSAM) after it was reported on Instagram and Facebook. Using decoy accounts purporting to be 14 years old or younger, the New Mexico Department of Justice said Meta’s algorithms served up CSAM, as well as accounts that facilitate the buying and selling of CSAM. According to a press release about the lawsuit, “certain child exploitation content is more than ten times more prevalent on Facebook and Instagram than on Pornhub and OnlyFans.”
In response to the complaint, a Meta spokesperson told TechCrunch: “We want teens to have safe, age-appropriate experiences online, and we have over 30 tools to support them and their parents. We’ve spent a decade working on these issues and hiring people who have dedicated their careers to keeping young people safe and supported online. The complaint mischaracterizes our work using selective quotes and cherry-picked documents.”
The unsealed documents show that Meta deliberately tried to recruit children and teenagers to Messenger, limiting security features in the process. A 2016 presentation, for example, raised concerns about the company’s declining popularity among teenagers, who spent more time on Snapchat and YouTube than Facebook, and outlined a plan to “win over” new teen users. An internal email from 2017 notes that a Facebook executive objected to scanning Messenger for “harmful content” because it would be a “competitive disadvantage against other apps that may offer greater privacy.”
The fact that Meta knew its services were so popular with children makes its failure to protect young users from sexual exploitation “even more egregious,” the documents state. A 2020 presentation noted that the company’s “End Game” was to “become the leading messaging app for kids in the US by 2022.” It also noted Messenger’s popularity among 6- to 10-year-olds.
Meta’s acknowledgment of child safety issues on its platform is particularly damning. An internal presentation from 2021, for example, estimated that 100,000 children a day were sexually harassed on Meta’s messaging platforms and received sexual content such as pictures of adult genitalia. In 2020, Meta employees worried that the platform could be removed from the App Store after an Apple executive complained that their 12-year-old child had been solicited on Instagram.
“This is the kind of thing that infuriates Apple,” an internal document said. The workers also questioned whether Meta had a timeline to stop “adults messaging minors on IG Direct.”
Another internal document from 2020 revealed that safeguards in place on Facebook, such as preventing “unconnected” adults from messaging minors, did not exist on Instagram. Applying the same safeguards to Instagram was “not a priority.” Meta considered allowing adult relatives to reach children on Instagram Direct a “big growth gamble” — which a Meta employee criticized as a “less than compelling” reason for failing to implement safety features. The employee also noted that grooming appeared twice as often on Instagram as on Facebook.
Meta addressed grooming in another child safety presentation in March 2021, which stated that “measurement, detection and safeguards” were “more mature” on Facebook and Messenger than on Instagram. The presentation noted that Meta was “underinvested in minor sexualization on IG,” particularly in sexual comments left on minor creators’ posts, and described the problem as a “terrible experience for creators and bystanders.”
“Exploitation of children is a horrific crime, and online predators are determined criminals,” a Meta spokesperson told TechCrunch. “We use advanced technology, hire child safety experts, report content to the National Center for Missing and Exploited Children, and share information and tools with other law enforcement agencies and authorities, including attorneys general, to help root out predators. In just one month, we disabled more than half a million accounts for violating our child safety policies.”
Meta has long faced scrutiny over its failure to adequately mitigate CSAM. Major US-based social media platforms are legally required to report CSAM cases to the National Center for Missing & Exploited Children’s (NCMEC) CyberTipline. According to NCMEC’s most recently published data, from 2022, Facebook submitted approximately 21 million CSAM reports, making up roughly 66% of all reports sent to the CyberTipline that year. When reports from Instagram (5 million) and WhatsApp (1 million) are included, Meta platforms account for approximately 85% of all reports made to NCMEC.
This disproportionate number could be explained by Meta’s overwhelmingly large user base of more than 3 billion daily active users, and a Meta spokesperson said these numbers are the result of proactive detection. However, in response to multiple inquiries, international leaders have argued that Meta isn’t doing enough to mitigate these millions of reports. In June, Meta told the Wall Street Journal that it had taken down 27 pedophile networks over the previous two years, yet investigators were still able to uncover numerous interconnected accounts that buy, sell and distribute CSAM. In the five months following the Journal’s report, the paper found that Meta’s recommendation algorithms continued to serve CSAM; although Meta removed certain hashtags, other pedophilic hashtags popped up in their place.
Meanwhile, Meta faces another lawsuit from 42 US state attorneys general over the impact of its platforms on children’s mental health.
“We find that Meta knows that its social media platforms are being used by millions of children under the age of 13 and is illegally collecting their personal information,” California Attorney General Rob Bonta told TechCrunch in November. “It shows this common practice where Meta says one thing in its public comments to Congress and other regulators, while internally it says something else.”
Update, 1/17/24, 11:30 PM ET with comments from Meta.