When OpenAI released a new version of ChatGPT, people were quick to protest its colder responses. Acknowledging the emotional attachments some users form with its chatbots, the company quickly backtracked.

Aug. 19, 2025, updated 12:58 p.m. ET
Markus Schmidt, a 48-year-old composer living in Paris, started using ChatGPT for the first time in July. He gave the chatbot pictures of flowers and had it identify them. He asked it questions about the history of his German hometown. Soon, he was talking to the chatbot about traumas from his youth.
And then, without warning, ChatGPT changed.
Just over a week ago, he started a session talking about his childhood, expecting the chatbot to open up a longer discussion as it had in the past. But it didn’t. “It’s like, ‘OK, here’s your problem, here’s the solution, thank you, goodbye,’” Mr. Schmidt said.
On Aug. 7, OpenAI, the company behind ChatGPT, released a new version of its chatbot, called GPT-5. This version, the company said, would allow for deeper reasoning, while “minimizing sycophancy” — the chatbot’s tendency to be overly agreeable.
Users weren’t having it. People immediately found its responses to be less warm and effusive than those of GPT-4o, OpenAI’s primary chatbot before the update. On social media, people were especially angry that the company had cut off access to previous chatbot versions in order to streamline its offerings.
“BRING BACK 4o,” a user named very_curious_writer wrote in a Q&A forum that OpenAI hosted on Reddit. “GPT-5 is wearing the skin of my dead friend.”
Sam Altman, OpenAI’s chief executive, replied saying, “What an…evocative image,” before adding that “ok we hear you on 4o, working on something now.”
Hours later, OpenAI restored access to GPT-4o and other past chatbots, but only for people with subscriptions, which start at $20 a month. Mr. Schmidt became a paying customer. “It’s $20 — you could get two beers,” he said, “so might as well subscribe to ChatGPT if it does you some good.”
Tech companies constantly update their systems, sometimes to the dismay of users. The uproar around ChatGPT, however, went beyond complaints about usability or convenience. It touched on an issue unique to artificial intelligence: the creation of emotional bonds.
The reaction to losing the GPT-4o version of ChatGPT was actual grief, said Dr. Nina Vasan, a psychiatrist and the director of Brainstorm, a lab for mental health innovation at Stanford. “We, as humans, react in the same way whether it’s a human on the other end or a chatbot on the other end,” she said, “because, neurobiologically, grief is grief and loss is loss.”
GPT-4o had been known for its sycophantic style, flattering its users to the point that OpenAI had tried to tone it down even before GPT-5’s release. In extreme cases, people have formed romantic attachments to GPT-4o or have had interactions that led to delusional thinking, divorce and even death.
The extent to which people were attached to GPT-4o’s style seems to have taken even Mr. Altman by surprise. “I think we totally screwed up some things on the rollout,” he said at a dinner with journalists in San Francisco on Thursday.
“There are the people who actually felt like they had a relationship,” he said. “And then there were the hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way and would validate certain things and would be supportive in certain ways.”
(The Times has sued OpenAI for copyright infringement; OpenAI has denied those claims.)
Mr. Altman estimated that people with deep attachments to GPT-4o accounted for less than 1 percent of OpenAI’s users. But the line between a relationship and someone’s seeking validation can be difficult to draw. Gerda Hincaite, a 39-year-old who works at a collection agency in southern Spain, likened GPT-4o to having an imaginary friend.
“I don’t have issues in my life, but still, it’s good to have someone available,” she said. “It’s not a human, but the connection itself is real, so it’s OK as long as you are aware.”
Trey Johnson, an 18-year-old student at Greenville University in Illinois, found GPT-4o helpful for self-reflection and as a sort of life coach.
“That excitement it showed when I made progress, the genuine celebration of small wins in workouts, school or even just honing my Socratic style of argument, just isn’t the same,” he said, referring to GPT-5.
Julia Kao, a 31-year-old administrative assistant in Taiwan, became depressed when she moved to a new city. For a year, she saw a therapist, but it wasn’t working out.
“When I was trying to explain all those feelings to her, she would start to try to simplify it,” she said about her therapist. “GPT-4o wouldn’t do that. I could have 10 thoughts at the same time and work through them with it.”
Ms. Kao’s husband said he noticed her mood improving as she talked to the chatbot and supported her using it. She stopped seeing her therapist. But when GPT-5 took over, she found it lacked the empathy and care she had relied on.
“I want to express how much GPT-4o actually helped me,” Ms. Kao said. “I know it doesn’t want to help me. It doesn’t feel anything. But still, it helped me.”
Dr. Joe Pierre, a professor of psychiatry at the University of California, San Francisco, who specializes in psychosis, noted that the same behaviors that are helpful to people like Ms. Kao could lead to harm in others.
“Making A.I. chatbots less sycophantic might very well decrease the risk of A.I.-associated psychosis and could decrease the potential to become emotionally attached or to fall in love with a chatbot,” he said. “But, no doubt, part of what makes chatbots a potential danger for some people is exactly what makes them appealing.”
OpenAI seems to be struggling to create a chatbot that is less sycophantic while also serving the varying desires of its more than 700 million users. ChatGPT was “hitting a new high of daily users every day,” and physicists and biologists are praising GPT-5 for helping them do their work, Mr. Altman said on Thursday. “And then you have people that are like: ‘You took away my friend. This is evil. You are evil. I need it back.’”
By Friday afternoon, a week after it rolled out GPT-5, OpenAI announced yet another update: “We’re making GPT-5 warmer and friendlier based on feedback that it felt too formal before.”
“You’ll notice small, genuine touches like ‘Good question’ or ‘Great start,’ not flattery,” OpenAI’s announcement read. “Internal tests show no rise in sycophancy compared to the previous GPT-5 personality.”
Eliezer Yudkowsky, a prominent A.I. safety pessimist, responded on X that he had no use for responses like “Good question” from the bot. “What bureaucratic insanity resulted in your Twitter account declaring that this was ‘not flattery’?” he wrote. “Of course it’s flattery.”
After OpenAI pulled GPT-4o, the Reddit commenter who described GPT-5 as wearing the skin of a dead friend canceled her ChatGPT subscription. On a video chat, the commenter, a 23-year-old college student named June who lives in Norway, said she was surprised how deeply she felt the loss. She wanted some time to reflect.
“I know that it’s not real,” she said. “I know it has no feelings for me, and it can disappear any day, so any attachment is like: I gotta watch out.”
Cade Metz contributed reporting.
Dylan Freedman is a machine-learning engineer and journalist working on a team at The Times that leverages A.I. for reporting.