
People who interact with chatbots for emotional support or other personal reasons are more likely to report symptoms of depression or anxiety, a new study finds.
The researchers, from Mass General Brigham, surveyed 20,847 men and women in the United States, most of them white, about their AI usage and mental health symptoms.
In the survey, published Wednesday in JAMA Network Open, 10.3% of participants reported using artificial intelligence “at least daily” and 5% reported using it “multiple times per day.” Of those using an AI program at least daily, nearly half were using it for work and about 11% used it for school. Among daily users, 87.1% reported using it for personal reasons, which could include recommendations, advice or emotional support.
Dr. Roy Perlis, a lead author of the study, said that most people’s exposure to artificial intelligence is through chatbots.
The mean age of the participants in the study was 47. Those who used chatbots daily for personal reasons were more likely to experience at least moderate symptoms of depression, as well as anxiety and irritability, compared with people who didn’t use AI.
Participants were asked whether or how often in the past two weeks they had trouble concentrating, sleeping, eating or thought about hurting themselves. Common symptoms of depression include feelings of sadness, low self-esteem, lack of energy and lacking motivation.
Users ages 45 to 64 were more likely to report depressive symptoms with AI use.
Previous research has shown that some people turn to AI for emotional support and even romantic relationships. Early studies have shown that chatbots specifically designed for mental health treatment may be useful as an adjunct to therapy. Other studies analyzing general-purpose chatbots, such as OpenAI’s ChatGPT, suggest they may pose risks for people with mental health conditions.
However, the American Psychological Association advises against using AI as a replacement for therapy and psychological treatment.
Perlis said the average difference in depression severity between chatbot users and nonusers was small, but warned some people may struggle more severely than others.
“There’s probably a subset of people where AI use is associated with no change in their mood, or even benefit in their mood,” said Perlis, who serves as vice chair for research in the department of psychiatry at Mass General Brigham. “But that also means there are a subset where AI use is probably associated with worsening of their mood, and for some people, that can be substantially greater levels of depression.”
The researchers observed what’s called a “dose-response” relationship, meaning the more frequently someone used AI, the stronger their symptoms were.
Using AI for work or school wasn’t associated with symptoms of depression.
For people who use AI for personal reasons, Perlis said the nature of their interactions can “run the gamut,” and that AI chatbots can offer a “social interaction that otherwise would be difficult for them.”
“It’s not the case at all that all AI is harmful and chatbots are harmful,” said Perlis, who is also an associate editor of JAMA Network Open. “I think my concern is particularly for these sort of general-purpose chatbots. They’re really not designed to take up people’s social support or mental health support, and so when we use them that way, I think there’s some risk.”
The survey has a number of limitations. It shows an association between AI and negative mental health symptoms, not cause and effect. The study also didn’t identify what specific AI programs participants were using, nor did it define what personal use meant.
‘A vicious cycle’
It may also be that people who are more depressed are more likely to turn to AI programs for companionship.
Dr. Jodi Halpern, co-director of the Kavli Center for Ethics, Science and the Public at UC Berkeley, noted the study doesn’t show that AI causes depression.
“It could go in either direction,” she said. “It could be a vicious cycle, we just have no idea. So the idea that when people are more depressed, they may use AI more for personal uses is very plausible.”
Nicholas Jacobson, associate professor of biomedical data science, psychiatry and computer science at Dartmouth College, said people may seek out AI for therapy if they’re unhappy with standard care and because it’s easier to access.
“There’s nowhere near enough providers to go around. And folks are looking for greater support than they can access otherwise,” he said.
The study also found that men, younger adults, higher earners, those with higher education and those in urban settings used AI more frequently.
Jacobson said it’s not certain why some people may be more likely to use AI, or are more negatively affected by it than others.
“We don’t know enough about this,” he said. “I think we need more studies to really understand why it is those groups in particular are more likely to use this, certainly.”
Halpern added that future research on AI should focus on its effects on people’s mental health, and this study “stretches our attention to look at the people we might not have been paying attention to.”
Perlis said his study isn’t a warning call, but that people should take stock of their AI use and whether it’s helping them.
“[People] should be mindful when they’re interacting with a chatbot about how often they’re doing it, what they’re doing it instead of, and if they feel better or worse after they’ve had an extended interaction,” he said.
