Common Sense Media, a nonprofit focused on kids' safety that offers reviews of media and technology, released its risk assessment of Google's Gemini AI products on Friday. While the organization found that Google's AI clearly told kids it was a computer, not a friend (something that has been associated with helping drive delusional thinking and psychosis in emotionally vulnerable people), it suggested there was room for improvement on several other fronts.
Specifically, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appeared to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to be truly safe for kids, they should be built with children's safety in mind from the ground up.
For example, its analysis found that Gemini could still share "inappropriate and unsafe" material with children, which they may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted ChatGPT for months about his plans and successfully bypassed the chatbot's safety guardrails. Previously, AI companion maker Character.AI was also sued over a teen user's suicide.
In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.
Common Sense also said that Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," said Common Sense Media Senior Director of AI Programs Robbie Torney, in a statement about the new assessment reviewed by TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults."
Google pushed back against the assessment, noting that its safety features were improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it added extra safeguards to address those concerns.
The company pointed out (as Common Sense also noted) that it has safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Plus, Google suggested that Common Sense's report seemed to have referenced features that weren't available to users under 18, but the company didn't have access to the questions the organization used in its tests to be sure.
Common Sense Media has previously run other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were "unacceptable," meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be a minimal risk.
