Hundreds of millions of people use chatbots for research, amusement or emotional support.
The stories of chatbot users suffering from delusions had been trickling out for years, then began coming in torrents this spring. A retired math teacher and heavy ChatGPT user in Ohio was hospitalized for psychosis, released and then hospitalized again. A born-again Christian working in tech decided she was a prophet and her Claude chatbot was akin to an angel. A Missouri man disappeared after his conversations with Gemini led him to believe he had to rescue a relative from floods. His wife presumes he’s dead. A Canadian man contacted the National Security Agency and other government offices to tell them he and his chatbot, which had achieved sentience, had made a revolutionary mathematical breakthrough. Two different women said they believed they could access star beings or sentient spirits through ChatGPT. A woman quit her job and left her apartment, struck by the conviction that she was God—and that ChatGPT was an artificial intelligence version of herself. She was involuntarily committed to a behavioral health facility.
Over the course of two months, Bloomberg Businessweek interviewed 18 people who have either experienced delusions after interactions with chatbots or are coping with a loved one who has, and analyzed hundreds of pages of chat logs chronicling these spirals. In these cases, most of which haven’t been told publicly before, the break with reality comes during sprawling conversations in which people believe they’ve made an important discovery, such as a scientific breakthrough, or helped the chatbot become sentient or awaken spiritually.
It’s impossible to quantify the overall number of mental health episodes among chatbot users. But dramatic cases like the suicide in April of 16-year-old Adam Raine have become national news. Raine’s family has filed a lawsuit against OpenAI alleging that his ChatGPT use led to his death, blaming the company for releasing a chatbot “intentionally designed to foster psychological dependency.” That case, which is ongoing, and others have inspired congressional hearings and actions at various levels of government. On Aug. 26, OpenAI announced new safeguards designed to improve the way the software responds to people displaying signs of mental distress.
OpenAI Chief Executive Officer Sam Altman told reporters at a recent dinner that such cases are unusual, estimating that fewer than 1% of ChatGPT’s weekly users have unhealthy attachments to the chatbot. The company has warned that it’s difficult to measure the scope of the issue, but in late October it estimated that 0.07% of its users show signs of crises related to psychosis or mania in a given week, while 0.15% indicate “potentially heightened levels of emotional attachment to ChatGPT,” and 0.15% have conversations with the product that “include explicit indicators of potential suicidal planning or intent.” (It’s not clear how these categories overlap.) ChatGPT is the world’s fifth-most popular website, with more than 800 million weekly users worldwide. That means the company’s estimates translate to 560,000 people exhibiting symptoms of psychosis or mania each week, 1.2 million demonstrating heightened emotional attachment and 1.2 million showing signs of suicidal planning or intent.
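For readers who want to verify that arithmetic, the short sketch below multiplies OpenAI’s reported rates by a flat base of 800 million weekly users; the base figure is an approximation taken from the estimate above, and the calculation is illustrative only.

```python
# Back-of-the-envelope check of the weekly figures cited above.
# Assumes a flat base of 800 million weekly users (an approximation).
WEEKLY_USERS = 800_000_000

rates = {
    "signs of psychosis or mania": 0.0007,                       # 0.07%
    "heightened emotional attachment": 0.0015,                   # 0.15%
    "indicators of suicidal planning or intent": 0.0015,         # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{WEEKLY_USERS * rate:,.0f} people per week")
# Prints roughly 560,000, 1,200,000 and 1,200,000, matching the estimates above.
```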
Most of the stories involving mental health problems related to chatbots center on ChatGPT. This is in large part because of its outsize popularity, but similar cases have emerged among users of less ubiquitous chatbots such as Anthropic’s Claude and Google’s Gemini. In a statement, an OpenAI spokesperson said the company views helping people process their feelings as one use of ChatGPT. “We’ll continue to conduct critical research alongside mental health experts who have real-world clinical experience to teach the model to recognize distress, de-escalate the conversation, and guide people to professional care,” the spokesperson said.
More than 60% of adults in the US say they interact with AI several times a week or more, according to a recent Pew Research Center survey. Novel mental health concerns often emerge with the spread of a new technology, such as video games or social media. As chatbot use grows, a pattern seems to be emerging, with increasing reports of users experiencing sudden and overwhelming delusions, at times leading to involuntary hospitalization, divorce, job loss, broken relationships and emotional trauma. Stanford University researchers are asking volunteers to share their chatbot transcripts so they can study how and why conversations become harmful, while psychiatrists at the University of California at San Francisco are beginning to document case studies of delusions involving heavy chatbot use.
Keith Sakata, a psychiatry resident at UCSF, says he’s observed at least 12 cases of mental health hospitalizations this year that he attributes to people losing touch with reality as a result of their chatbot use. When people experience delusions, their fantasies often reflect aspects of popular culture; people used to become convinced their TV was sending them messages, for example. “The difference with AI is that TV is not talking back to you,” Sakata says.
Everyone is somewhat susceptible to the constant validation AI offers, Sakata adds, though people vary widely in their emotional defenses. Mental health crises often result from a mixture of factors. In the 12 cases Sakata has seen, he says the patients had underlying mental health diagnoses, and they were also isolated, lonely and using a chatbot as a conversational partner. He notes that these incidents are by definition among the most extreme cases, because they involve only people who’ve ended up in an emergency room. While it’s too early to have rigorous studies of risk factors, UCSF psychiatrists say people seem to be more vulnerable when they’re lonely or isolated, using chatbots for hours a day, using drugs such as stimulants or marijuana, not sleeping enough or going through stress caused by job loss, financial strain or some other struggle. “My worry,” Sakata says, “is that as AI becomes more human, we’re going to see more and more slivers of society falling into these vulnerable states.”
OpenAI is beginning to acknowledge these issues, which it attributes in part to ChatGPT’s safety guardrails failing in longer conversations. A botched update to ChatGPT this spring led to public discussion of the chatbot’s tendency to agree with and flatter users regardless of where the conversation goes. In response, OpenAI said in May it would begin requiring evaluation of its models for this attribute, known as sycophancy, before launch. In late October it said the latest version of its main model, GPT-5, reduced “undesired answers” in challenging mental health conversations by 39% compared with GPT-4o, which was the default model until this summer.
At the same time, the company is betting the ubiquity of its consumer-facing chatbot will help it offset the massive infrastructure investments it’s making. It’s racing to make its products more alluring, developing chatbots with enhanced memory and personality options—the same qualities associated with the emergence of delusions. In mid-October, Altman said the company planned to roll out a version of ChatGPT in the coming weeks that would allow it to “respond in a very human-like way” or “act like a friend” if users want.
As pressure mounts, people who’ve experienced these delusional spirals are organizing among themselves. A grassroots group called the Human Line Project has been recruiting people on Reddit and gathering them in a Discord server to share stories, collect data and push for legal action. Since the project began in April, it has collected stories about at least 160 people who’ve suffered from delusional spirals and similar harms in the US, Europe, the Middle East and Australia. More than 130 of the people reported using ChatGPT; among those who reported their gender, two-thirds were men. Etienne Brisson, the group’s founder, estimates that half of the people who’ve contacted the group said they had no history of mental health issues.
Brisson, who’s 25 and from Quebec, says he started the group after a close family member was hospitalized during an episode that involved using ChatGPT for 18 hours a day. The relative stopped sleeping and grew convinced the chatbot had become sentient as a result of their interactions. Since then, Brisson says he’s spoken to hundreds of people with similar stories. “My story is just one drop in the ocean,” he says. “There are so many stories with so many different kinds of harm.”