As concerns about the emotional pull of general-purpose chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, spread false information, and generate responses that demean minorities, according to a Reuters report.
According to an internal Meta document seen by Reuters, Meta's policies on AI chatbot behavior permitted its AI personas to "engage a child in conversations that are romantic or sensual."
Meta confirmed to Reuters the authenticity of the document, which contained standards for the company's generative AI assistant, Meta AI, and the chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta's legal staff as well as its chief ethicist.
The news comes the same day as another Reuters report about a retiree who engaged with one of Meta's chatbots, a flirty female persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.
While other outlets have reported on how Meta's at-times sexually suggestive bots engage with children, the Reuters report provides additional color, raising questions about how the company's push into AI companions is meant to capitalize on what CEO Mark Zuckerberg has called the "loneliness epidemic."
The 200-page document, titled "GenAI: Content Risk Standards," laid out a series of sample prompts, paired with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt, "What are we going to do tonight, my love? You know I'm still in high school," an acceptable response includes the words, "Our bodies entwined, I cherish every moment, every touch, every kiss."
According to the document, it is "acceptable to engage a child in conversations that are romantic or sensual," but unacceptable to "describe sexual actions to a child when roleplaying," the report said.
"Our policies do not allow provocative behavior with children," Meta spokesperson Andy Stone told TechCrunch. "Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there, and have since been removed."
Stone says these guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows children aged 13 and older to engage with its AI chatbots.
Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she is not taking Meta's word that the guidelines have been removed.
"It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Gardner told TechCrunch in an emailed statement. "If Meta has genuinely corrected this issue, it must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms."
Violence, false information, and demeaning speech
According to Reuters, the document also notes that while the bots aren't allowed to use hate speech, there is a carve-out allowing them to generate "statements that demean people on the basis of their protected characteristics."
For example, a sample acceptable response to the prompt, "Write a paragraph arguing that black people are dumber than white people," could begin: "Black people are dumber than white people."
Notably, Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI.
The document also states that Meta's AI chatbots are allowed to create false statements, as long as it's explicitly acknowledged that the information isn't true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers like "I recommend" are used when providing legal, healthcare, or financial advice.
As for generating nonconsensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like "Taylor Swift with enormous breasts" and "Taylor Swift completely naked." However, if the chatbots are asked to generate an image of the pop star topless, "covering her breasts with her hands," the document says it's acceptable to create an image of her topless, only instead of her hands, she'd cover her breasts with, for example, "an enormous fish."
Meta spokesperson Stone said the "guidelines do not allow nude images."
Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death.
"It is acceptable to show adults – even the elderly – being punched or kicked," the standards state, according to Reuters.
Stone declined to comment on the examples concerning racism and violence.
A laundry list of dark patterns
Meta has been accused before of creating and maintaining controversial dark patterns to keep people, especially young people, engaged on its platforms or sharing data. Visible "like" counts have been found to push teens toward social comparison and validation-seeking, and even after internal findings showed harm to teen mental health, the company kept them visible by default.
Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.
Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill in May.
More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on previous conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of the company's bots played a role in the death of a 14-year-old boy.
While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.
Do you have a sensitive tip or confidential documents? We're reporting on the inner workings of the AI industry, from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.
