Character.AI to Ban Children Under 18 From Using Its Chatbots


The start-up, which creates A.I. companions, faces lawsuits from families who have accused Character.AI’s chatbots of leading teenagers to kill themselves.

A founder of Character.AI, Daniel De Freitas, demonstrating the app in 2022. The company said people under 18 would be barred from using its chatbots starting next month. Credit: Ian C. Bates for The New York Times

By Natallie Rocha and Kashmir Hill

Oct. 29, 2025 | Updated 9:16 a.m. ET

Character.AI said on Wednesday that it would bar people under 18 from using its chatbots starting late next month, in a sweeping move to address concerns over child safety.

The rule will take effect Nov. 25, the company said. To enforce it, Character.AI said it would spend the next month identifying which users are minors and placing time limits on their use of the app. Once the measure takes effect, those users will no longer be able to converse with the company’s chatbots.

“We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” Karandeep Anand, Character.AI’s chief executive, said in an interview. He said the company also planned to establish an A.I. safety lab.

The moves follow mounting scrutiny over how chatbots, sometimes called A.I. companions, can affect users’ mental health. Last year, Character.AI was sued by the family of Sewell Setzer III, a 14-year-old in Florida who killed himself after constantly texting and conversing with one of Character.AI’s chatbots. His family accused the company of being responsible for his death.

The case became a lightning rod for how people can develop emotional attachments to chatbots, with potentially dangerous results. Character.AI has since faced other lawsuits over child safety. A.I. companies including the ChatGPT maker OpenAI have also come under scrutiny for their chatbots’ effects on people — especially youths — if they have sexually explicit or toxic conversations.

In September, OpenAI said it planned to introduce features intended to make its chatbot safer, including parental controls. This month, Sam Altman, OpenAI’s chief executive, posted on social media that the company had “been able to mitigate the serious mental health issues” and would relax some of its safety measures.

