When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was still reeling from the Cambridge Analytica scandal. At the time, he thought he could fix Facebook’s content moderation problem with better technology.
The problem, he quickly learned, ran deeper than technology. Reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. They then had about 30 seconds per piece of flagged content to decide not only whether it violated the rules, but also what to do about it: block it, ban the user, or limit its distribution. Those quick calls were only “slightly better than 50% accurate,” according to Levenson.
“It was kind of like flipping a coin, whether the human reviewers could actually get the policies right, and this was days after the bad had already happened,” Levenson told TechCrunch.
This kind of lagging, reactive approach is not sustainable in a world of nimble and well-funded adversaries. The rise of AI-powered chatbots has only exacerbated the problem, as moderation failures have led to a number of high-profile incidents, such as chatbots providing teenagers with self-harm guidance or AI-generated images evading safety filters.
Levenson’s frustration sparked the idea of “policy as code”: turning static policy documents into executable, updateable logic tied directly to enforcement. That insight led him to found Moonbounce, which announced a $12 million funding round on Friday, TechCrunch has exclusively learned. The round was led by Amplify Partners and StepStone Group.
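In spirit, “policy as code” means a written rule becomes executable, versionable logic rather than a document a reviewer must memorize. Here is a minimal illustrative sketch of that idea; every name and rule in it is hypothetical, not Moonbounce’s actual API:

```python
# Illustrative sketch of "policy as code": a written rule such as
# "self-harm guidance: block" becomes executable, updateable logic.
# All names and rules here are hypothetical, not Moonbounce's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    name: str
    matches: Callable[[str], bool]   # does this content trigger the rule?
    action: str                      # "block", "limit", or "allow"

# Updating enforcement becomes a code change, not a re-translated PDF.
RULES = [
    PolicyRule("self_harm_guidance",
               lambda text: "how to hurt yourself" in text.lower(),
               action="block"),
    PolicyRule("spam_link",
               lambda text: "buy followers" in text.lower(),
               action="limit"),
]

def enforce(text: str) -> str:
    """Return the action for the first rule the content triggers."""
    for rule in RULES:
        if rule.matches(text):
            return rule.action
    return "allow"
```

In practice a real system would replace the keyword lambdas with a trained model, but the structural point stands: the policy and its enforcement live in the same artifact.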
Moonbounce gives companies an added layer of security wherever content is generated, whether by users or by AI. The company has trained its own large language model to ingest a client’s policy documents, evaluate content at runtime, return a verdict in 300 milliseconds or less, and take action. Depending on the customer’s preference, that action might mean slowing delivery while the content is queued for later human review, or blocking high-risk content outright.
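That runtime flow (score each piece of content, then either block it or deliver it slowly while queueing it for human review) could be sketched roughly as follows. The risk scores, threshold, and function names are illustrative assumptions, not Moonbounce’s implementation:

```python
# Hypothetical sketch of a runtime moderation flow: classify content,
# then block it or queue it for asynchronous human review, depending
# on the customer's configuration. Names and thresholds are assumed.
from collections import deque

review_queue = deque()  # content held for later human review

def classify(text: str) -> float:
    """Stand-in for a trained model: returns a risk score in [0, 1]."""
    return 0.9 if "threat" in text.lower() else 0.1

def moderate(text: str, block_high_risk: bool = True) -> str:
    score = classify(text)
    if score >= 0.8:
        if block_high_risk:
            return "blocked"            # stop delivery immediately
        review_queue.append(text)       # deliver slowly, review later
        return "delayed_pending_review"
    return "delivered"
```

The `block_high_risk` flag captures the per-customer choice the article describes: some platforms want hard blocks, others prefer slowed delivery plus a human in the loop.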
Today, Moonbounce serves three main industries: platforms that deal with user-generated content, such as dating apps; AI companies that create characters or companions; and AI image generators.
Moonbounce supports more than 40 million daily reviews and serves more than 100 million daily active users on the platform, Levenson said. Clients include startup Channel AI, image and video production company Civitai, and character role-playing platforms Dippy AI and Moescape.
“Security can actually be a product advantage,” Levenson told TechCrunch. “It just never was because it’s always something that happens later, not something you can actually build into your product. And we’re seeing our customers find really interesting and innovative ways to use our technology to make security a differentiator and part of their product story.”
Tinder’s Head of Trust and Safety recently explained how the dating platform uses these types of LLM services to achieve a 10x improvement in detection accuracy.
“Content moderation has always been a problem plaguing large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,” said Lenny Pruss, general partner at Amplify Partners. “We invested in Moonbounce because we envision a world where real-time objective guardrails become the backbone of every AI-mediated application.”
AI companies are facing increasing legal and reputational pressure: chatbots have been accused of pushing teenagers and vulnerable users toward suicide, and image generators such as xAI’s Grok have been used to create non-consensual nude pictures. When internal guardrails fail, it becomes a liability issue, and Levenson said AI companies are increasingly looking outside their own walls for help bolstering their security infrastructure.
“We’re a third party that sits between the user and the chatbot, so our system isn’t overwhelmed by context the way the conversation itself is,” Levenson said. “The chatbot itself has to remember potentially tens of thousands of tokens that have come in the past… We’re only concerned with enforcing rules at runtime.”
Levenson runs the 12-person company with former Apple colleague Ash Bhardwaj, who previously built large-scale cloud and AI infrastructure across the iPhone maker’s core offerings. Their next focus is a capability called “iterative steering,” developed in response to cases like the 2024 suicide of a 14-year-old Florida boy who became obsessed with a Character AI chatbot. Instead of bluntly refusing or blocking when harmful topics arise, the system would redirect the conversation, tweaking prompts in real time to nudge the chatbot toward a more actively supportive response.
“We’re hoping to be able to add to our actions toolbox the ability to point the chatbot in a better direction to actually take the user’s prompt and modify it to force the chatbot to be not just a sympathetic listener, but a helpful listener in these situations,” Levenson said.
When asked whether his exit strategy included an acquisition by a company like Meta, which would bring his content moderation work full circle, Levenson said he recognizes both how well Moonbounce would fit into his old employer’s stack and his duties as CEO.
“My investors would kill me for saying this, but I would hate to see someone buy us and then limit the technology,” he said. “Like, ‘OK, this is ours now, and no one else can benefit from it.’”
