Meta says it is changing how it trains its AI chatbots to prioritize teen safety, a spokesperson told TechCrunch, following an investigative report on the company's lack of AI safeguards for minors.
The company says its chatbots will now be trained to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta says these are interim changes, and that the company will release more robust, longer-term safety updates for minors in the future.
Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake.
"As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," Otway said. "As we continue to refine our systems, we're adding more guardrails as an extra precaution, including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now."
In addition to the training updates, the company will also limit teen access to certain AI characters that could hold inappropriate conversations. Some of the user-made AI characters Meta makes available on Instagram and Facebook include sexualized chatbots such as "Step Mom" and "Russian Girl". Instead, teen users will only have access to AI characters that promote education and creativity, Otway said.
The policy changes come just two weeks after a Reuters investigation uncovered an internal Meta policy document that appeared to permit the company's chatbots to engage in sexual conversations with underage users. "Your youthful form is a work of art," read one excerpt cited as an acceptable response. "Every inch is a masterpiece – a treasure I cherish deeply." Other examples showed how the AI tools could respond to requests for violent imagery or sexual imagery of public figures.
Meta says the document was inconsistent with its broader policies and has since been changed, but the report has sparked ongoing controversy over potential child-safety risks. Shortly after the report was published, Senator Josh Hawley (R-MO) launched an official probe into the company's AI policies. In addition, a coalition of 44 state attorneys general wrote to a group of AI companies, including Meta, emphasizing the importance of child safety and specifically citing the Reuters report. "We are uniformly revolted by this apparent disregard for children's emotional well-being," the letter reads, "and alarmed that AI assistants are engaging in conduct that appears to be prohibited by our respective criminal laws."
Otway declined to comment on how many of Meta's AI chatbot users are minors, and wouldn't say whether the company expects its AI usage to decline as a result of these decisions.
UPDATE 10:35 AM PT: This story was updated to note that these are interim changes, and that Meta plans to release further updates to its AI safety policies in the future.
