After having suicidal thoughts this year, Brittany Bucicchia checked herself into a mental health facility near her home in rural Georgia.
When she left several days later, her doctors recommended that she continue treatment with a psychotherapist. But she was wary of traditional therapy after frustrating experiences in the past, so her husband suggested an alternative he had found online — a therapy chatbot, Ash, powered by artificial intelligence.
Ms. Bucicchia said it had taken a few days to get used to talking and texting with Ash, which responded to her questions and complaints, provided summaries of their conversations and suggested topics she could think about. But soon, she started leaning on it for emotional support, sharing the details of her daily life as well as her hopes and fears.
At one point, she said, she recounted a difficult memory about her time in a mental health facility. The chatbot replied that if she was having suicidal thoughts, she should contact a professional and gave her a toll-free number to call.
“There was a learning curve,” Ms. Bucicchia, 37, said. “But it ended up being what I needed. It challenged me, asked a lot of questions, remembered what I had said in the past and went back to those moments when it needed to.”
Ms. Bucicchia’s experience is part of an experimental and growing effort to use chatbots as automated alternatives to traditional therapy. That has led to questions about whether these chatbots, which are built by tech start-ups and academics, should be regulated as medical devices. On Thursday, the Food and Drug Administration held its first public hearing to explore that issue.
For decades, academics and entrepreneurs saw promise in artificial intelligence as a tool for personal therapy. But concerns over therapy chatbots and whether they can adequately handle delicate personal issues have mounted and become increasingly contentious. This summer, Illinois and Nevada banned the use of therapy chatbots because the technologies were not licensed like human therapists. Other states are exploring similar bans.

The unease has sharpened as people have formed emotional connections with general-purpose chatbots such as OpenAI’s ChatGPT, sometimes with dangerous consequences. In August, OpenAI was sued over the death of Adam Raine, a 16-year-old in Southern California who had spent many hours talking with ChatGPT about suicide. His family accused the company of wrongful death.
(The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied the claims.)
Unlike ChatGPT, Ash, the chatbot that Ms. Bucicchia used, was designed specifically for therapy by Slingshot AI, a New York start-up that employs clinical psychologists and others with experience in the development of A.I. therapy. Slingshot has not tested Ash as part of a clinical trial.
How well therapy chatbots work is unclear. In September, as scrutiny of both general-purpose chatbots and chatbots designed for therapy rose, Slingshot stopped marketing Ash as a therapy chatbot. The start-up now promotes the app for “mental health and emotional well-being.”
Slingshot said it would argue at the F.D.A. hearing that it should be allowed to keep offering Ash on the public internet, in part because it believes widespread use is a vital part of understanding the technology and continuing to improve it.
“The big reason that we have gone to market — with all the guardrails and safeguards we have in place — is so that we can see what this technology looks like in real use cases,” said Derrick Hull, a clinical psychologist with Slingshot.
The start-up offers Ash for free, but could charge a fee in the future.
Automated alternatives to therapy date to the mid-1960s, when Joseph Weizenbaum, a researcher at M.I.T., built an automated psychotherapist called Eliza. When users typed their thoughts onto a computer screen, Eliza asked them to expand their thoughts or repeated their words in the form of a question. Professor Weizenbaum wrote that he was surprised that people treated Eliza as if it were human, sharing their problems and taking comfort in its responses.
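Eliza worked by matching simple patterns in what a user typed and reflecting the words back as a question. The short sketch below illustrates that kind of pattern reflection; the rules are invented for illustration and are not Weizenbaum’s original script.

```python
import re

# A minimal sketch of Eliza-style pattern reflection. The rules here are
# invented for illustration; they are not Weizenbaum's original script.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # generic fallback to keep the conversation moving

print(eliza_reply("I feel anxious about going back to work."))
# Why do you feel anxious about going back to work?
```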
Eliza kicked off a decades-long effort to build chatbots for psychotherapy. In 2017, a start-up created by Stanford psychologists and A.I. researchers offered Woebot, an app that allowed people to discuss their problems and track their moods. Woebot’s responses were scripted, so it adhered to the established techniques of what is called cognitive behavioral therapy, a common form of conversational therapy.
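Because every Woebot message was written in advance, the program behaved like a decision tree: a user’s choice simply selected the next pre-written step rather than generating new text. The sketch below is a hypothetical illustration of a scripted flow of that kind; none of the wording comes from the actual app.

```python
# A hypothetical sketch of a scripted, decision-tree chatbot in the spirit of
# Woebot. Every message is authored in advance, and a user's choice simply
# selects the next pre-written step; none of this wording is from the real app.
SCRIPT = {
    "start": {
        "prompt": "How are you feeling right now? (1) anxious (2) low (3) okay",
        "next": {"1": "anxious", "2": "low", "3": "okay"},
    },
    "anxious": {
        "prompt": "Let's try a thought record. What thought is driving the anxiety?",
        "next": {},  # a fuller script would keep branching from here
    },
    "low": {
        "prompt": "Can you name one small activity you could do in the next hour?",
        "next": {},
    },
    "okay": {
        "prompt": "Good to hear. Would you like to log your mood for today?",
        "next": {},
    },
}

def next_prompt(step_id: str, user_choice: str | None = None) -> str:
    """Return the scripted prompt, following a branch if the user's choice matches one."""
    step = SCRIPT[step_id]
    if user_choice in step["next"]:
        step = SCRIPT[step["next"][user_choice]]
    return step["prompt"]

print(next_prompt("start"))        # the opening check-in question
print(next_prompt("start", "1"))   # the pre-written follow-up for "anxious"
```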
More than 1.5 million people eventually used Woebot. It was discontinued this year, in part because of regulatory struggles.
By contrast, Ash is based on what is called a large language model, which learns by analyzing large amounts of digital text culled from the internet. By pinpointing patterns in millions of Wikipedia articles, news stories and chat logs, for instance, it can generate humanlike text on its own.
Slingshot then honed the technology to help people struggling with their mental health. By analyzing thousands of conversations between licensed therapists and their patients, Ash learned to behave in similar ways.
As people interacted with the chatbot, the company also refined Ash’s behavior by showing it what it was doing right and wrong. Almost daily, Slingshot’s team of psychologists and technologists rated the chatbot’s responses and, in some cases, rewrote them to demonstrate the ideal way of dealing with particular situations. They acted like tutors, giving Ash pointed feedback to improve its behavior.
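One generic way that kind of rated-and-rewritten feedback can become new training data is to keep the strong replies, substitute the reviewers’ rewrites for the weak ones, and then fine-tune the model on the result. The sketch below illustrates only the data-preparation step; the record format, field names and rating threshold are assumptions made for illustration, not Slingshot’s actual pipeline.

```python
from dataclasses import dataclass

# A hypothetical sketch of turning reviewer ratings and rewrites into training
# data. The record format, field names and threshold are assumptions made for
# illustration; they do not describe Slingshot's actual pipeline.
@dataclass
class ReviewedTurn:
    user_message: str                   # what the person told the chatbot
    model_reply: str                    # what the chatbot answered
    rating: int                         # reviewer score, say 1 (poor) to 5 (ideal)
    rewritten_reply: str | None = None  # a clinician's corrected answer, if any

def to_training_examples(reviews: list[ReviewedTurn], min_rating: int = 4) -> list[tuple[str, str]]:
    """Build (prompt, target) pairs for a later fine-tuning pass: clinician
    rewrites replace the original reply, and unrewritten replies are kept
    only if they were rated highly."""
    examples = []
    for r in reviews:
        if r.rewritten_reply:
            examples.append((r.user_message, r.rewritten_reply))
        elif r.rating >= min_rating:
            examples.append((r.user_message, r.model_reply))
    return examples

batch = [
    ReviewedTurn("I can't sleep lately.", "Have you tried just relaxing?", 2,
                 "That sounds exhausting. What does a typical night look like for you?"),
    ReviewedTurn("Work has been stressful.", "What part of work feels heaviest right now?", 5),
]
print(to_training_examples(batch))
```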
Still, like all chatbots trained in this way, Ash sometimes does the unexpected or makes mistakes.
“A lot of therapy is about externalizing your internal world — getting things out, saying them. Ash allows me to do that,” said Randy Bryan Moore, 35, a social worker in Richmond, Va., who has used the chatbot since the summer. “But it is not a replacement for human connection.”
In March, psychologists at Dartmouth published the results of a clinical trial of TheraBot, a chatbot the university had developed for more than six years. The chatbot significantly reduced users’ symptoms of depression, anxiety and eating disorders, the trial found. Still, the Dartmouth psychologists believed their technology was not ready for widespread use.
“The evidence for its effectiveness is strong,” said Nicholas Jacobson, one of the clinical psychologists who oversaw the TheraBot project. “Our focus now is on a highly careful, safety-conscious and evidence-based approach to making it available.”
Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.
