OpenAI has established a Child Safety team dedicated to addressing incidents involving the use of its Artificial Intelligence (AI) tools by underage users. The team's main objective is to ensure that the company's technologies are not used to the detriment of children.
Under the leadership of Sam Altman, the company continues to implement measures to ensure that its innovations do not negatively affect users, especially minors.
The team's existence came to light through a job listing posted on the OpenAI website, in which the company seeks to hire a child safety policy expert to join the initiative.
This team is responsible for managing the processes, incidents, and reviews that safeguard the OpenAI online ecosystem, working closely with the legal, platform policy, and research teams.
The hired specialist will be responsible for ensuring compliance with the company's policies on AI-generated content across its tools, such as ChatGPT, to prevent the posting of sensitive or harmful material.
In addition, the expert is expected to provide guidance on content-policy compliance and to help improve the review and response processes for sensitive content.
OpenAI is focused on restricting access to content that is inappropriate for minors, in line with privacy and child protection laws. Recently, the company announced a partnership with Common Sense Media to evaluate whether its technologies are suitable for young users.
The goal is to provide a safe environment for children and to educate users on the responsible use of tools such as ChatGPT, fostering effective collaboration between the engineering, policy, and research teams.
Currently, OpenAI states that tools like ChatGPT are not suitable for children under the age of 13, and it requires parental or guardian consent for users between the ages of 13 and 18.
The company also recommends adult supervision when minors interact with these services, even for those who meet the age requirements, especially in educational settings.
(Reference image source: Levart_Photographer, Unsplash)