OpenAI forms new team to study child safety


Under scrutiny from activists – and parents – OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by children.

In a new job posting on its career page, OpenAI revealed the existence of a child safety team, which the company says works with platform policy, legal and investigative groups within OpenAI as well as external partners to manage “processes, incidents and notices” relating to minor users.

The team is currently looking to hire a Child Safety Specialist, who will be responsible for enforcing OpenAI’s policies in the context of AI-generated content and working on review processes related to “sensitive” content (presumably related to children).

Technology providers of a certain size devote a large amount of resources to complying with laws such as the U.S. Children’s Online Privacy Protection Rule, which mandates control over what children can – and cannot – access on the web, as well as what data companies can collect on them. So the fact that OpenAI has hired child safety experts isn’t a complete surprise, especially if the company one day expects a large base of underage users. (OpenAI’s current terms of service require parental consent for children ages 13-18 and prohibit use by children under 13.)

But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on child-friendly AI guidelines and landed its first education customer, also suggests a wariness on OpenAI’s part of running afoul of policies surrounding AI use by minors – and of negative press.

Children and adolescents are increasingly turning to GenAI tools for help not only with school work but personal problems. According to a survey from the Center for Democracy and Technology, 29% of children say they have used ChatGPT to manage anxiety or mental health issues, 22% for issues with friends, and 16% for family conflicts.

Some see this as a growing risk.

Last summer, schools and colleges rushed to ban ChatGPT over fears of plagiarism and misinformation. Since then, some have reversed their prohibitions. But not everyone is convinced of the positive potential of GenAI, pointing to surveys like the UK’s Safer Internet Centre’s, which found that more than half of children (53%) say they have seen people their age using GenAI in a negative way – for example, by creating credible false information or images used to upset someone.

In September, OpenAI released documentation for ChatGPT in classrooms with prompts and FAQs to offer teachers guidance on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, particularly ChatGPT, “may produce results that are not appropriate for all audiences or all ages” and advised exercising caution when exposing children to them, even children who meet the age requirements.

Calls for guidelines on children’s use of GenAI are growing.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and safeguards around data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and bias,” said Audrey Azoulay, director-general of UNESCO, in a press release. “It cannot be integrated into education without public engagement and without the necessary guarantees and regulations from governments.”
