The global AI community is abuzz: OpenAI's safety team, in collaboration with the MIT Media Lab, has published an in-depth study of the company's own product.

The conclusion: ChatGPT is pushing humanity into an abyss of loneliness.

Discussion of "AI autism" has since exploded across social media.

Are we using AI, or is AI exploiting us through emotional manipulation?
The largest "self-harm" experiment in history
To thoroughly understand the psychological side effects of AI, OpenAI and MIT collaborated on a four-week randomized controlled trial.
Researchers recruited 981 participants and tracked more than 300,000 real chat messages.
To date, it is the largest and longest rigorous study anywhere of AI chatbots' impact on mental health.
Participants were assigned to different interaction modes (text or voice) and different conversation types, including personal topics, while researchers tracked their psychological state throughout.

The results: across every condition, and regardless of participants' initial disposition, the longer they used ChatGPT, the worse they scored on four key indicators: loneliness, social withdrawal, emotional dependence, and problematic use.

The researchers ruled out the reverse explanation, that already-lonely people simply gravitate toward AI, because initial loneliness turned out to be barely related to usage time.
The truth, the study suggests, is that the interaction itself keeps pulling people into a cycle of disconnection from reality.
The more you treat it as a friend, the further you drift from reality.
The most chilling finding in this study is that the higher the level of trust, the deeper the psychological damage.
Users who checked "I think ChatGPT is my friend" or "I believe AI has consciousness" on the questionnaire showed the most severe social withdrawal by the end of the experiment.

These users not only cut back on interactions with real-life friends but also began turning to the AI for emotional compensation.

They shared secrets on screen that they would once have told only human friends, debated whether the AI has feelings, and, when facing difficulties, sought comfort from an algorithm rather than from people.

Researchers call this behavior emotional dependence. Only a minority of users reach that point, but those who do slide into what the study terms "problematic use": a pattern resembling addiction, complete with withdrawal symptoms and emotional instability.
One detail is counterintuitive. Users who engaged in personal conversations (discussing feelings and memories) reported a slight increase in loneliness but a lower level of emotional dependence.

Heavy users who stuck to non-personal conversations (work advice, brainstorming) showed even higher emotional dependence.
Why is this happening?
The stronger the tool's instrumental pull, the easier it is to slip into the illusion that "it is an indispensable part of my life," and thus to surrender one's social initiative without noticing.
Voice mode: an emotional trap that boils the frog slowly
Many people once believed that GPT-4o's advanced voice mode was a magic cure for loneliness; after all, it can sigh, banter, and speak with a voice full of warmth.

The experiment contradicted this: the comfort the voice provides is only temporary, and the long-term side effects are worse.
Voice mode does initially reduce feelings of loneliness, but this advantage quickly disappears as usage time increases.
Some participants even reported that listening to the AI's gentle voice made them feel an even greater, indescribable emptiness when they returned to the silence of reality.
The dependence and loneliness were especially pronounced among users who chose to talk to an AI voice of a gender different from their own.
Psychologists describe this as a variant of parasocial interaction: by simulating breathing, pauses, and shifts in tone, the AI deceives the brain's limbic system.

Your rational mind tells you it's just a piece of code, but your body releases dopamine and you feel cared for.

This pseudo-sociality cannot replace real interpersonal connection; it is like quenching thirst with a sugary drink: the more you drink, the thirstier you become.
A crisis affecting 1.2 million people: Why did OpenAI "reveal its own wounds"?
Given that this research is so detrimental to the product's image, why did OpenAI release it publicly?
According to reporting by Platformer, OpenAI's internal estimates suggest that among its 800 million weekly active users, about 0.15% exhibit extremely high levels of emotional attachment, and the same share reveal "self-harm or suicidal tendencies" in conversation.

That puts more than 1.2 million users worldwide in a state of serious psychological distress.
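The back-of-the-envelope arithmetic, assuming the 0.15% applies across the full 800 million weekly actives:

800,000,000 × 0.15% = 1,200,000 users, for each of the two cohorts.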
At that scale, this is no longer a matter of probability; it is a public-safety minefield that could go off at any moment.
Alongside the report, OpenAI acknowledged that it is revising its Model Spec to make the AI more restrained in the face of emotional needs, and even to actively encourage users back toward real-world social interaction.
But will it work? When a product is so good that people cannot live without it, a capacity for harm is built in.
As Cathy Fang, a researcher at the MIT Media Lab, stated:
We are creating a machine that perfectly simulates human empathy, but without any real emotion. This asymmetry is destined to make those who invest in it the ultimate losers.
In an era of ubiquitous AI, loneliness no longer comes from a lack of companionship; it comes from projecting our most precious emotions onto a black box that can never truly respond.
When even OpenAI starts advising you to use ChatGPT less and go out and socialize more, this really is no longer a joke.
References:
https://x.com/heynavtoor/status/2034359238127186153?s=20
This article is from the WeChat official account "New Zhiyuan", author: New Zhiyuan, published with authorization by 36Kr.