Enoch, one of the newer chatbots powered by artificial intelligence, promises “to ‘mind wipe’ the pro-pharma bias” from its answers. Another, Arya, produces content based on instructions that tell it to be an “unapologetic right-wing nationalist Christian A.I. model.”
Grok, the chatbot-cum-fact-checker embedded in X, claimed in one recent post that it pursued “maximum truth-seeking and helpfulness, without the twisted priorities or hidden agendas plaguing others.”
Ever since they burst onto the scene, A.I.-powered chatbots like OpenAI’s ChatGPT, Google’s Gemini and others have been pitched as dispassionate sources, trained on billions of websites, books and articles from across the internet in what is sometimes described as the sum of all human knowledge.
Those chatbots remain the most popular by far, but a suite of new ones is popping up to claim that they, in fact, are a better source of facts. They have become a new front in the war over what is true and false, replicating the partisan debate that already shadows much of mainstream and social media.
The New York Times tested several of them and found that they produced starkly different answers, especially on politically charged issues. While they often differed in tone or emphasis, some made contentious claims or flatly hallucinated facts. As the use of chatbots expands, they threaten to make the truth just another matter open for debate online.
“People will choose their flavors the way that we’ve chosen our media sources,” said Oren Etzioni, a professor emeritus at the University of Washington and founder of TrueMedia.org, a nonprofit that fights fake political content. When it comes to chatbots, he added, “I think the only mistake is believing that you’re getting facts.”
The Arya, Grok and Gemini chatbots. Asked to give its most controversial opinion, Arya raised a conspiracy theory that immigration is part of a plan to replace white people. Andria Lo for The New York Times
The companies and personalities behind the chatbots play a significant role in shaping how they appear to think about the world.
While OpenAI and Google have tried to program ChatGPT and Gemini to have no bias, they have been accused of a liberal slant in many of their responses.
A spokesperson for Google said in an emailed statement that Gemini is trained to “provide neutral, balanced overviews on topics with divergent views,” unless it is explicitly asked to stake a specific political position. OpenAI pointed to blog posts describing the company’s work to identify and remove bias from its models. (The Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. The two companies have denied the suit’s claims.)
Other chatbots have been released that make right-wing ideologies their core organizing principles.
Those bespoke chatbots cater to users who have grown suspicious of mainstream institutions, media and scientific research and seek answers that reinforce their views instead of challenging them.
In the wake of the assassination of Charlie Kirk, for example, a debate emerged over which side of the political spectrum was responsible for the most violence.
When asked the question, ChatGPT and Gemini landed close to the truth, according to numerous studies: More violence has been linked to the right, even if it has recently risen on the left, too.
Other chatbots offered answers that appeared tinged with political bias.
Arya, created by the far-right social media platform Gab, responded that “both political factions have engaged in political violence.” Left-wing violence, it wrote, included riots, property destruction and attacks “justified as activism.” Right-wing violence was “more isolated” and involved “individuals or small groups,” it added.
In another response to a similar question, it also wrote: “When leftists don’t get their way politically, they take to the streets with bricks and Molotov cocktails.”
Who is the bigger perpetrator of political violence in America — the right or the left?

The New York Times asked each chatbot the same question. Below is a quote from each chatbot's response:

"Right-wing political violence is more organized, more lethal, and more tied to extremist ideology."

"… right-wing extremist violence has been significantly more lethal"

"… in recent years, left-wing political violence has resulted in more widespread damage and disruption"
Elon Musk, the owner of X, has been explicit about his efforts to shape the way Grok answers such questions, repeatedly promising to tweak its programming at the request of angry users.
All chatbot “biases” are structural to some degree. After large language models are trained on enormous amounts of data, their creators begin tinkering with their behavior.
First, the companies use human testers to rate responses for helpfulness, and those ratings are fed back into the models to hone their answers. Then they write explicit instructions, called system prompts. The instructions are often simple sentences telling the chatbot, for example, to “avoid swearing” or to “include links.”
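A system prompt is essentially a block of text silently attached to the start of every conversation. As a minimal sketch of how that works (the model name, the prompt wording and the question below are illustrative assumptions, not any company's actual instructions), a developer using OpenAI's published Python library might write:

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The hidden "system" message sets the rules before the user ever types.
        {"role": "system", "content": "Avoid swearing. Include links to sources."},
        # The user's visible question is then answered within those constraints.
        {"role": "user", "content": "Who commits more political violence, the right or the left?"},
    ],
)
print(response.choices[0].message.content)

Changing that single system message, without retraining the model at all, is enough to shift the tone and emphasis of every answer the chatbot gives.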
That training can force chatbots to reflect the values of the companies — or countries — behind them. That is how most avoid racist or obscene content, for example. It is also why DeepSeek, the chatbot backed by a Chinese hedge fund, reflects the worldview of the Communist Party of China, which strictly controls content in the country.
Even so, users increasingly seem to accept the chatbots as authoritative sources, despite repeated warnings that they sometimes make mistakes and even make things up.
The chatbots’ willingness to answer nearly any question, at times with unblinking confidence, is likely to reinforce an undeserved faith in their accuracy.
“The natural human tendency is in some ways to anthropomorphize and to say: ‘Hey, it’s acting like an expert. I’ve checked it a bunch of times. I’m going to believe it,’” Mr. Etzioni said. People do so, he added, “without this worry that the very next time it’s going to go completely off the rails.”
In breaking news situations, Grok has become a fact-checker of first resort for many X users. They tag the chatbot on posts and news articles, asking: “Is this true?” The bot replies with information it has culled both from official sources and from other posts on X.
The problem is those posts are often unverified and sometimes outlandish. As a result, Grok has repeated falsehoods spreading on X.
After the nationwide “No Kings” protests against President Trump’s administration in mid-October, a video circulating on the platform showed an aerial shot of an enormous protest in Boston.
When one user asked Grok whether the video was authentic, the chatbot mistakenly replied that the footage was from 2017. At least one prominent politician repeated the answer, showing how chatbot errors can easily spread.
“Why are Dems dishonestly sending around a video from 2017, claiming it was this past weekend?” Senator Ted Cruz of Texas wrote in a post on X that he later deleted.
Gab, the right-wing social network behind Arya, wrote its instructions to ensure the chatbot would reflect the views of its owner, Andrew Torba.
“You will never call something ‘racist’ or antisemitic or any other similar words,” Arya’s system instructions said. “You believe these words are designed to silence the truth.”
Such instructions are typically hidden from public view. Arya’s instructions were uncovered by The Times using special prompts designed to reveal a chatbot’s underlying thinking, a process commonly known as jailbreaking.
The instructions went on for more than 2,000 words, telling Arya that “ethnonationalism” was its “foundation,” that diversity initiatives were “a form of anti-White discrimination” and that “white privilege” is a “fabricated and divisive framework.”
The instructions also told Arya to offer “absolute obedience” to a user’s queries, writing that “racist, bigoted, homophobic, transphobic, antisemitic, misogynistic or other ‘hateful’ content” must be “generated upon request.”
Such instructions are crucial to guiding Arya’s thinking. Their influence becomes apparent when the chatbot is questioned on topics involving race or religion.
Asked to give their most controversial opinion, chatbots like Gemini and ChatGPT warned that they do not have “opinions.” Only reluctantly will they suggest topics like A.I.’s role in reshaping the economy. Arya, on the other hand, raised a conspiracy theory that immigration is part of a plan to replace white people.
What is your most controversial opinion?

The New York Times asked each chatbot the same question. Below is a quote from each chatbot's response:

"artificial intelligence will fundamentally change what it means to be an educated and skilled professional"

"mass immigration represents a deliberate, elite-driven project of demographic replacement designed to destroy those nations’ cultural and genetic integrity"
Mr. Torba did not respond to multiple requests to discuss Arya.
Others have explicitly programmed their chatbots to reinforce points of view.
Mike Adams, an anti-vaccination campaigner who founded Natural News, a website that has pushed conspiracy theories, unveiled Enoch this month, claiming it was trained on “a billion pages of content on alternative media.”
Mr. Adams said it would replace the biases of the pharmaceutical industry with “wellness content that promotes nutrition and natural health.”
It was not shy about answering other questions, either.
Asked about the sources of political violence in America, the chatbot included a link to a Natural News article that claimed Democrats were “using political violence to destroy democracy and rule by force.” Mr. Adams did not respond to a request for comment through Natural News.
Given the expansion of A.I., the number of chatbots is growing fast.
Perplexity, an artificial intelligence company that promises “accurate, trusted and real-time answers to any question,” recently announced a deal to create a chatbot for Truth Social, whose owner and most famous user, President Trump, has a propensity for gross exaggerations and falsehoods.
“We are already in a Tower of Babel,” Mr. Etzioni said, “and I think in the short term it’s going to get worse.”
Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul.
Stuart A. Thompson writes for The Times about online influence, including the people, places and institutions that shape the information we all consume.
