Under scrutiny from activists – and parents – OpenAI has formed a new group to study ways to prevent children from misusing or abusing its AI tools.
In a new job listing on its career page, OpenAI reveals the existence of a Child Safety team, which the company says works with platform policy, legal and research teams within OpenAI as well as external partners to manage “processes, incidents and assessments” related to underage users.
The team is currently looking to hire a Child Safety Enforcement Specialist, who will be responsible for enforcing OpenAI’s policies on AI-generated content and for working on review processes related to “sensitive” (presumably child-related) content.
Technology vendors of a certain size devote considerable resources to complying with laws such as the US Children’s Online Privacy Protection Rule, which imposes controls on what children can and cannot access on the Web and on what kinds of data companies may collect on them. So the fact that OpenAI is hiring child safety experts comes as no surprise, especially if the company expects a significant underage user base one day. (OpenAI’s current terms of use require parental consent for children ages 13 to 18 and prohibit use by children under 13.)
But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines and landed its first education customer, also suggests a wariness on OpenAI’s part of running afoul of policies pertaining to minors’ use of AI, and of attracting negative press.
Children and teenagers are increasingly turning to GenAI tools for help not just with homework but with personal matters. According to a poll from the Center for Democracy and Technology, 29% of kids report using ChatGPT to deal with stress or mental health issues, 22% for issues with friends, and 16% for family conflicts.
Some see it as a growing risk.
Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not everyone is convinced of GenAI’s potential for good, pointing to surveys such as one from the UK’s Safer Internet Centre, which found that over half of children (53%) report seeing people their age use GenAI in a negative way, for example by creating believable fake information or images used to upset someone.
In September, OpenAI published documentation for ChatGPT in the classroom, with prompts and FAQs offering guidance to educators on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, specifically ChatGPT, “may produce results that are not appropriate for all audiences or all ages” and advised “caution” around exposure to children, even those who meet its age requirements.
Calls for guidance on the use of GenAI for children are growing.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails for data protection and user privacy. “Generative AI can be a huge opportunity for human development, but it can also cause harm and prejudice,” said Audrey Azoulay, director-general of UNESCO, in a press release. “It cannot be integrated into education without public participation and the necessary safeguards and regulations from governments.”