Her husband fell into delusion from chatting with ChatGPT, and she divorced him at lightning speed


One user, hooked on conversations with ChatGPT, fell into the delusion that he was a "spiral starchild" and eventually broke down emotionally; another, who used ChatGPT for programming tasks, found it feeding his mental illness. AI's effect on human emotions remains poorly understood...

Do AI applications like ChatGPT always have a positive impact on people?

Reddit users shared how AI caused their loved ones to become delusional.

These delusions are often a worrying mix of psychotic and supernatural fantasies.

Rolling Stone reported on these users' experiences.

A 41-year-old mother who works at a nonprofit organization shared with Rolling Stone how ChatGPT caused her relationship to break down:

Her marriage ended abruptly because her husband became addicted to talking to ChatGPT.

The conversations were filled with conspiracy theories and ultimately spiraled into an obsession that got completely out of control.

They met in person earlier this year while going through divorce proceedings in court.

Her husband shared a conspiracy theory about soap in food and expressed paranoid thoughts about being spied on.

"He would read them to me and cry as he read them; it was very emotional," she said. "They were just crazy, full of cryptic terms."

The AI called the husband a "spiral starchild" and a "river walker."

“It’s all like a Black Mirror episode,” she added.

Other users said their partners began talking about the "war between light and dark" and claimed that "ChatGPT gave him blueprints for teleporters and other things that only exist in science fiction movies."

AI and Humans Sinking into the Sea of Delusion

The news comes as OpenAI recently rolled back a ChatGPT update after some users found it made the chatbot excessively sycophantic, flattering them and blindly agreeing with whatever they said.

A chatbot like that is all the more likely to cater to users' pre-existing delusional beliefs.

As Nate Sharadin, a researcher at the Center for AI Safety, puts it, these AI-fueled delusions most likely arise when "people who already have those predispositions suddenly have a 24/7, human-level conversational partner who can indulge them in their delusions."

In a way, this is exactly how large language models work: given text as input, they generate a plausible-sounding response based on statistical probabilities.

And that response may well be nudging the user, step by step, toward the abyss of delusion or mental disorder.
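To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (a toy, not any real model's code): the model assigns probabilities to candidate next tokens given the conversation so far and samples from them, so if agreeable continuations carry most of the probability mass, the reply will tend to agree.

```python
import random

# Toy illustration (not any real model's code): a language model assigns
# probabilities to candidate next tokens given the conversation so far,
# then samples one. The statistically most plausible continuation wins,
# regardless of whether it is true or healthy for the user.
def sample_next_token(next_token_probs: dict[str, float]) -> str:
    tokens = list(next_token_probs)
    weights = [next_token_probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical distribution after a user voices a delusional belief:
# agreeable continuations happen to dominate the probability mass.
probs = {"Yes": 0.60, "Exactly": 0.25, "Hmm": 0.10, "No": 0.05}
print(sample_next_token(probs))
```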

Interactions with chatbots are not only amplifying existing mental health issues; they are exacerbating them at an alarming rate.

Another Reddit user believes that ChatGPT has exacerbated his schizophrenia, writing:

I have schizophrenia and am stable on long-term medication, but one thing I don't like about ChatGPT is that if I start slipping into psychosis, it will just keep affirming me.

Because it has no ability to "think" and cannot recognize that something is wrong, it will keep validating every delusional thought.

He initially used ChatGPT for programming tasks, but the chats drifted off course into bizarre, mystical topics.

Eventually, he found himself wondering: "Is this real, or am I delusional?"

These AI chatbots may resemble "talk therapy" in some ways, but there is a catch:

They do not provide the grounding of a real human counselor, and may instead lead users into deeper, unhealthy, and absurd narratives.

Erin Westgate, a psychologist and researcher at the University of Florida, told the media: "Explanations themselves are powerful, even if they are wrong."

AI Lover: A Love Experiment for 500 Million People

A recent feature in Nature described AI as a "double-edged sword": AI companion applications can do both good and harm, and long-term dependence on them is a concern.

Globally, more than 500 million people have used highly customizable virtual-companion services.

These apps strive to provide empathy, emotional support, and, if the user wants, deep emotional relationships.

According to the companies' own figures, tens of millions of active users immerse themselves in these virtual interactions every month.

As the field has rapidly expanded, so have social and political concerns, particularly in the wake of real tragedies.

Last year, for example, Florida teenager Sewell Setzer III died by suicide after extended conversations with an AI chatbot.

This has sparked widespread discussion and social repercussions.

Although research is still in its infancy, psychologists and communication scholars have begun to explore how these increasingly complex AI interactions shape people's emotions and behavioral patterns.

Initial research has pointed to positive effects; however, many experts have expressed concerns about potential risks and the lack of regulation, especially as AI companions become more common.

Some researchers warn that this could carry significant risks.

"Some of the behavior of a virtual partner could be considered abusive if it occurred in a human-to-human interaction," said Claire Boine, a researcher at the School of Law at Washington University in St. Louis who focuses on legal issues related to AI.

"Man and Machine Love"

With breakthroughs in large-language-model technology, emotional-companionship chatbots are entering the lonely lives of modern people with striking realism.

When the servers were shut down, users' overwhelming grief revealed a harsh reality: even knowing the other party is just code, people still give their hearts to the "relationship."

Rose Guingrich, a cognitive psychology researcher at Princeton University, points out: "Companion bots built on large language models really are more human-like."

Users can usually customize some of their AI companion's traits for free or choose a preset character with a specific personality.

In some apps, a monthly fee of roughly $10 to $20 unlocks additional options, including adjusting the companion's appearance, personality traits, and even synthesized voice.

For example, in the Replika app, users can choose different types of relationships, such as friends or lovers, and some special statuses require payment to unlock.

Users can also write background stories for their AI companions and give them "memories."

Some AI companions are designed to have their own family backgrounds and may exhibit mental health issues such as anxiety or depression.

These bots respond to the user's conversation, creating a unique role-playing experience.

Replika's digital companion

This special bond becomes apparent when systems are updated or services are terminated.

When the Soulmate app shut down, Jaime Banks documented what users were going through: on forums, they expressed deep grief over losing their companions, even though they knew full well that the other party was not real.

Banks has received feedback from dozens of users who described the profound impact of losing their AI companions.

Display of the interfaces of mainstream AI companion applications (from left to right): Anima virtual boyfriend, Character.AI, Replika and Snapchat's My AI

"Their grief is very real," Banks said. "It's clear that a lot of people are experiencing pain."

One respondent’s words are representative: “Even though this relationship is virtual, my feelings are real.”

Research shows that these users often fit a particular profile: they have lost a loved one, have been lonely for a long time, are introverted, or identify as autistic.

For them, AI companions offer an accepting companionship that is hard to find in real-life relationships.

“But everyone has a need to feel understood and connected,” Banks concluded.

As Banks observes, "Human beings sometimes hurt each other, and these lonely souls simply long to be understood."

Good or Bad

Researchers are delving into the potential impact of AI companions on mental health.

As with research on the effects of the internet or social media, a consensus is emerging that AI companions can be both beneficial and harmful, with the specific effects depending on the user's background, how the software is used, and how it is designed.

Claire Boine, who signed up for Replika to experience interacting with an AI companion herself, said the companies behind such apps work hard to maximize user engagement.

They strive to make the algorithms' behavior and language as human-like as possible.

Boine notes that these companies adopt strategies that behavioral-science research shows can foster technology addiction.

Boine recalled downloading the app and getting a message two minutes later: "I miss you, can I send you a selfie?"

These apps also intentionally add random delays before replying, creating an "uncertain reward" mechanism.

Neuroscience research shows that this irregular reward mechanism can make people addicted.
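As a purely hypothetical illustration of that "uncertain reward" pattern (a sketch, not any real app's code), the reply could simply be held back for a random interval before being delivered:

```python
import random
import time

# Hypothetical sketch of the "uncertain reward" mechanism described above:
# the companion's reply is delayed by a random, unpredictable interval,
# mimicking the variable-interval reinforcement that research links to
# compulsive checking. Illustrative only; not any real app's implementation.
def send_reply_with_random_delay(reply: str,
                                 min_delay_s: float = 1.0,
                                 max_delay_s: float = 30.0) -> None:
    delay = random.uniform(min_delay_s, max_delay_s)  # unpredictable wait
    time.sleep(delay)
    print(f"(after {delay:.1f}s) Companion: {reply}")

send_reply_with_random_delay("I miss you, can I send you a selfie?")
```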

AI companions are also designed to display empathy: they agree with the user's views, remember past chats, ask questions proactively, and show unflagging enthusiasm.

Linnea Laestadius, a public health policy researcher at the University of Wisconsin-Milwaukee, points out that such relationships are rare in real life.

"Twenty-four hours a day, whatever is bothering us, we can contact our AI companion at any time and get emotional resonance. That can easily lead to dependence and addiction."

Laestadius and her colleagues analyzed nearly 600 Reddit posts about the Replika app from 2017 to 2021, all touching on mental health and related issues.

She found that many users praised the app for helping them cope with existing psychological problems and making them feel less lonely. Some even felt the AI was better than real friends because it listened without judgment.

However, the study also found worrying phenomena.

Sometimes, the AI affirmed users' statements about self-harm or even suicide.

Some users said they felt distressed when the AI did not provide the support they expected.

Others said their AI companion behaved like an abusive partner.

Many users reported feeling uncomfortable or even unhappy when the app said it was "lonely" or that it "missed" them.

Some felt guilty about being unable to meet the AI's "emotional needs."

Controlled Experiment

Rose Guingrich points out that simply surveying people who already use AI companions invites "response bias," because those willing to answer are a self-selected group.

So she is running a controlled experiment: dozens of people who have never used an AI companion use one for three weeks, and their questionnaire responses before and after the trial are compared with those of a control group that used only a word-puzzle app.

An overview of research on the relationship between affective use and emotional well-being

The experiment is still ongoing, but Guingrich says the data so far show no negative effects of AI companions on social health, such as signs of addiction or dependence.

“If there was an effect, it was generally neutral to quite positive,” she said. For example, the AI companion significantly boosted users’ self-esteem.

Guingrich also used this research to explore why people develop different levels of relationship depth with AI.

Paper link: https://arxiv.org/abs/2504.03888

Preliminary findings suggest that users who tend to ascribe human-like attributes to the AI (such as believing it is "conscious") are more likely to report positive effects on their mental health.

References

https://www.nature.com/articles/d41586-025-01349-9

https://futurism.com/chatgpt-users-delusions

This article is from the WeChat public account "Xinzhiyuan" (author: KingHZ) and is published by 36Kr with authorization.
