AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — at least in some cases.
Announced in a post on the company’s official blog on Friday, Anthropic will begin allowing teens and tweens to use third-party apps (though not necessarily its own apps) powered by its AI models, as long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they’re using.
In a support article, Anthropic lists several safety measures that developers building AI apps for minors should include, such as age verification systems, content moderation and filtering, and educational resources on the “safe and responsible” use of AI for minors. The company also says it may make available “technical measures” intended to tailor AI product experiences for minors, such as a “child safety system prompt” that developers targeting minors would be required to implement.
Developers using Anthropic’s AI models must also comply with “applicable” child safety and data privacy regulations, such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law protecting the online privacy of children under 13. Anthropic says it plans to “periodically” audit apps for compliance, suspend or terminate the accounts of developers who repeatedly violate the compliance requirement, and require developers to “clearly state” on public-facing websites or documentation that they are in compliance.
“There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to incorporate our API into their products for minors.”
Anthropic’s policy change comes as children and teens increasingly turn to AI-powered tools for help not only with schoolwork but with personal issues, and as rival generative AI vendors — including Google and OpenAI — explore more use cases aimed at kids. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. And Google made its Bard chatbot, since rebranded as Gemini, available to teens in English in select regions.
According to a poll from the Center for Democracy and Technology, 29% of kids report having used generative AI like OpenAI’s ChatGPT to deal with stress or mental health issues, 22% for issues with friends, and 16% for family conflicts.
Last summer, schools and colleges rushed to ban generative AI apps — ChatGPT in particular — over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not everyone is convinced of generative AI’s potential for good, pointing to surveys such as one from the UK Safer Internet Centre, which found that over half of kids (53%) report having seen people their age use generative AI in a negative way — for example, creating believable false information or images used to upset someone (including pornographic deepfakes).
Calls for guidelines on kids’ use of generative AI are growing.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and safeguards on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said Audrey Azoulay, UNESCO’s director-general, in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”