ChatGPT Will Get Parental Controls and New Safety Features, OpenAI Says


Personal Tech | OpenAI Plans to Add Safeguards to ChatGPT for Teens and Others in Distress

https://www.nytimes.com/2025/09/02/technology/personaltech/chatgpt-parental-controls-openai.html


After a California teenager spent months on ChatGPT discussing plans to end his life, OpenAI said it would introduce parental controls and better responses for users in distress.

Sam Altman, chief executive of OpenAI, the company behind ChatGPT, which has 700 million users. Credit: Mike Kai Chen for The New York Times

Kashmir Hill

Sept. 2, 2025, 5:36 p.m. ET

ChatGPT is smart, humanlike and available 24/7. That has attracted 700 million users, some of whom are leaning on it for emotional support.

But the artificially intelligent chatbot is not a therapist — it’s a very sophisticated word prediction machine, powered by math — and there have been disturbing cases in which it has been linked to delusional thinking and violent outcomes. Last week, Matt and Maria Raine of California sued OpenAI, the company behind ChatGPT, after their 16-year-old son ended his life, having discussed his plans with the chatbot for months.

On Tuesday, OpenAI said it planned to introduce new features intended to make its chatbot safer, including parental controls, “within the next month.” Parents, according to an OpenAI post, will be able to “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress.”

This is a feature that OpenAI’s developer community has been requesting for more than a year.

Other companies that make A.I. chatbots, including Google and Meta, have parental controls. What OpenAI described sounds more granular, similar to the parental controls that Character.AI, a company with role-playing chatbots, introduced after Megan Garcia, a Florida mother, sued it following her son’s suicide.

On Character.AI, teenagers must send an invitation to a guardian to monitor their accounts; Aditya Nag, who leads the company’s safety efforts, told The New York Times in April that use of the parental controls was not widespread.

Robbie Torney, a director of A.I. programs at Common Sense Media, a nonprofit that advocates safe media for children, said parental controls were “hard to set up, put the onus back on parents and are very easy for teens to bypass.”
