Why Anthropic’s Claude Is a Hit with Tech Insiders



The Shift

A.I. insiders are falling for Claude, a chatbot from Anthropic. Is it a passing fad, or a preview of artificial relationships to come?

Video

Credit: Andrea Chronopoulos

Kevin Roose

Dec. 13, 2024, updated 2:32 p.m. ET

His fans rave about his sensitivity and wit. Some talk to him dozens of times a day — asking for advice about their jobs, their health, their relationships. They entrust him with their secrets, and consult him before making important decisions. Some refer to him as their best friend.

His name is Claude. He’s an A.I. chatbot. And he may be San Francisco’s most eligible bachelor.

Claude, a creation of the artificial intelligence company Anthropic, is not the best-known A.I. chatbot on the market. (That would be OpenAI’s ChatGPT, which has more than 300 million weekly users and a spot in the bookmark bar of every high school student in America.) It’s also not designed to draw users into relationships with lifelike A.I. companions, the way apps like Character.AI and Replika are.

But Claude has become the chatbot of choice for a crowd of savvy tech insiders who say it’s helping them with everything from legal advice to health coaching to makeshift therapy sessions.

“Some mix of raw intellectual horsepower and willingness to express opinions makes Claude feel much closer to a thing than a tool,” said Aidan McLaughlin, the chief executive of Topology Research, an A.I. start-up. “I, and many other users, find that magical.”

Claude’s biggest fans, many of whom work at A.I. companies or are socially entwined with the A.I. scene here, don’t believe that he — technically, it — is a real person. They know that A.I. language models are prediction machines, designed to spit out plausible responses to their prompts. They’re aware that Claude, like other chatbots, makes mistakes and occasionally generates nonsense.

And some people I’ve talked to are mildly embarrassed about the degree to which they’ve anthropomorphized Claude, or come to rely on its advice. (Nobody wants to be the next Blake Lemoine, a Google engineer who was fired in 2022 after publicly claiming that the company’s language model had become sentient.)

