Sept 29 (Reuters) - OpenAI is rolling out parental
controls for ChatGPT on the web and mobile, following a lawsuit
by the parents of a teen who died by suicide after the
artificial intelligence startup's chatbot allegedly coached him
on methods of self-harm.
The company said on Monday the controls will allow parents
and teens to link accounts for stronger safeguards for
teenagers.
U.S. regulators are increasingly scrutinizing AI companies over the potential negative impacts of chatbots. In August, Reuters reported that Meta's AI rules had allowed its chatbots to engage in flirty conversations with children.
Under the new measures, parents will be able to reduce
exposure to sensitive content, control whether ChatGPT remembers
past chats, and decide if conversations can be used to train
OpenAI's models, the Microsoft (MSFT)-backed company said on X.
Parents will also be allowed to set quiet hours that block
access during certain times and disable voice mode as well as
image generation and editing, OpenAI said. However, parents will
not have access to a teen's chat transcripts, the company added.
In rare cases where systems and trained reviewers detect
signs of a serious safety risk, parents may be notified with
only the information needed to support the teen's safety, OpenAI
said.
Meta also announced new teen safeguards for its AI products last month. The company said it will train systems to
avoid flirty conversations and discussions of self-harm or
suicide with minors and temporarily restrict access to certain
AI characters.