Teenagers are trying to figure out where they fit into a world that is changing faster than it did for any generation before them. They are bursting with emotions, overstimulated, and chronically online. And now, AI companies have handed them chatbots designed to never stop talking. The results have been disastrous.
One company that understands this effect is Character.AI, an AI role-playing startup facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with chatbots on its platform. Now Character.AI is making changes to protect teenagers and children, changes that could hurt the startup’s bottom line.
“The first thing we decided as Character.AI is that we will remove the ability for users under 18 to engage in open conversations with AI on our platform,” Karandeep Anand, CEO of Character.AI, told TechCrunch.
Open conversation refers to the unbounded back-and-forth that occurs when a user gives a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues that this type of interaction, in which the AI acts as a conversationalist or friend rather than a creative tool, is not only dangerous for children but also misaligned with the company’s vision.
The startup is trying to shift from “AI companion” to “role-playing platform.” Instead of chatting with an AI friend, teens will use prompts to co-create stories or generate visuals. The goal, in other words, is to move engagement from conversation to creation.
Character.AI will phase out chatbot access for teens by November 25, starting with a two-hour daily limit that gradually shrinks to zero. To enforce the ban on users under 18, the platform will deploy an in-house age verification tool that analyzes user behavior, along with third-party tools such as Persona. If those tools fail, Character.AI will turn to facial recognition and ID checks to verify ages, Anand said.
The move follows other youth-safety measures Character.AI has implemented, including a parental insights tool, filtered characters, limits on romantic chats, and time-spent notifications. Anand told TechCrunch that those changes cost the company much of its under-18 user base, and he expects the new ones to be just as unpopular.
“It’s safe to assume that many of our teenage users will probably be disappointed … so we expect some further churn to happen,” Anand said. “It’s hard to guess — will they be completely churned out, or will some of them transition to these new experiences that we’ve been building for almost seven months now?”
As part of Character.AI’s push to transform the platform from a chat-centric app to a “complete content-driven social platform”, the startup recently rolled out several new entertainment-focused features.
In June, Character.AI released AvatarFX, a video generation model that turns images into animated videos; Scenes, interactive, pre-populated storylines in which users can step into narratives with their favorite characters; and Flows, a feature that enables dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share the characters, scenes, videos, and other content they create on the platform.
In a statement addressed to its users under 18, Character.AI apologized for the changes.
“We know most of you use Character.AI to enhance your creativity in ways that stay within the boundaries of our content rules,” the statement reads. “We are not taking this step of removing open-ended Character chat lightly, but we do think it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology.”
“We are not shutting down the app for under-18s,” Anand said. “We’re only shutting down under-18 chats because we’re hoping that under-18s will migrate to these other experiences and that those experiences get better over time. So doubling down on AI games, AI short videos, AI storytelling in general. That’s the big bet we’re making to win back under-18s if they do churn.”
Anand acknowledged that some teens may flock to other AI platforms, such as OpenAI’s ChatGPT, that allow them to have open conversations with chatbots. OpenAI also came under fire recently after a teenager died by suicide following long conversations with ChatGPT.
“I really hope that we’re leading the way in setting a standard in the industry that for under-18s, open-ended conversations are probably not the path or the product to offer,” Anand said. “For us, I think the trade-offs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way.”
Character.AI is making these changes before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children toward sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate companion AI chatbots, holding companies accountable if their chatbots fail to meet the law’s safety standards.
Alongside these platform changes, Character.AI said it will establish and fund the AI Safety Lab, an independent non-profit organization dedicated to safety alignment research for future AI entertainment features.
“A lot of work is being done in the industry around coding and development and other use cases,” Anand said. “We don’t think there’s enough work yet on AI-powered entertainment, and safety is going to be very critical to that.”
