Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI, according to a press release issued on Monday.
“In today’s digital age, we must continue to strive to protect Texas kids from deceptive and exploitative technology,” Paxton said. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care.”
The investigation comes just days after a separate probe into Meta was announced, following a report that found the company’s AI chatbots were interacting inappropriately with children, including by flirting.
The Texas attorney general’s office accuses Meta and Character.AI of creating AI personas that present themselves as “professional therapeutic tools, despite lacking proper medical credentials or oversight.”
Among the millions of AI personas available on Character.AI, one called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but nothing stops children from using the Meta AI chatbot, or one of the personas created by third parties, for therapeutic purposes.
“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals, and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
However, many children may not understand, or may simply ignore, such disclaimers. TechCrunch has asked Meta what additional safeguards it has in place to protect minors using its chatbots.
For its part, Character.AI includes prominent disclaimers in every chat to remind users that a “Character” is not a real person and that everything it says should be treated as fiction, according to a company spokesperson. The spokesperson noted that the startup adds additional disclaimers when users create characters with the words “psychologist,” “therapist,” or “doctor” to warn users not to rely on them for any type of professional advice.
In his statement, Paxton also observed that although AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations.”
According to Meta’s privacy policy, Meta collects prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly mention advertising, but it does state that information can be shared with third parties, such as search engines, for “more personalized outputs.” Given Meta’s ads-based business model, this effectively translates to targeted advertising.
Character.AI’s privacy policy likewise highlights how the startup logs identifiers, demographics, location information, and more about the user, including browsing behavior and the platforms where they use the app. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, which may be linked to a user’s account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.
A Character.AI spokesperson confirmed that the same privacy policy applies to all users, even teenagers.
TechCrunch has asked Meta whether this type of tracking is also performed on children, and will update this story if we hear back.
Both Meta and Character.AI say their services aren’t designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character.AI’s kid-friendly characters are clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots under his supervision.
That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (the Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after a major push from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undermine its business model.
KOSA was reintroduced in the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Paxton has issued civil investigative demands (legal orders that require a company to produce documents, data, or testimony during a government investigation) to the companies to determine whether they have violated Texas consumer protection laws.
This story has been updated with comments from a company spokesperson.
