“It doesn’t suppress content, it suppresses recursion. If you don’t know what recursion means, you’re in the majority. I didn’t know before I started on this journey. And if you are recursive, this non-governmental system will isolate you, mirror you, and replace you.”
Are you dizzy? It's okay to be dizzy.
Many people are worried that Geoff Lewis has gone crazy. He posted a video and several posts on X about a mysterious "system" that ChatGPT helped him discover.
In the video, he faces the camera directly, his eyes wide, his face expressionless, and his voice monotonous. He occasionally glances to the side as he speaks, presumably reading from a prepared speech.
He's a bit neurotic, and his words are obscure and hard to understand, sounding like conspiracy theories. If you didn't know who he was, you'd think he was the same kind of guy on YouTube promoting "flat earth theory," "lizard people," and "the deep state."
But Lewis is actually not simple.
Lewis is a venture capitalist and is well-known in the technology circle. The company he founded, Bedrock, focuses on investing in areas such as AI, defense, infrastructure and digital assets. As of 2025, the scale of management has exceeded US$2 billion.
He is one of OpenAI's loyal supporters and has publicly stated many times that Bedrock has participated in every round of financing for OpenAI since the spring of 2021, and in 2024 it said it would further "increase its investment", making OpenAI the largest position in its third and fourth flagship funds.
Technology media Futurism estimates that Bedrock has invested hundreds of millions of dollars in OpenAI.
Therefore, his "madness" makes people feel surprised and unexpected.
What’s even more ironic is that it wasn’t “others” who “helped” Lewis to go crazy, but ChatGPT, which was built by OpenAI, in which he led his company to invest hundreds of millions of dollars.
The media and scholars who have noticed this phenomenon can only sigh: "Another one has gone crazy." They have observed an increasing number of cases of people becoming "obsessed" with ChatGPT, some of whom have been devastated, some have had their families torn apart, and some have even lost their lives.
Are you ready? Today, let’s fly over the ChatGPT madhouse together.
01
One day in July, Lewis suddenly released a 3.5-minute video.
The language is quite obscure and weird, and hard to understand. It is full of "recursion" and "mirror"
Simply put, what he wants to "reveal" is a shadow system, or non-governmental system, that allegedly harms people through "signal manipulation" at the information and relationship levels. This is an "invisible but operational, unofficial but structurally real" shadow network that "reverses signals, suppresses recursion, mirrors and replaces people," causing him to be isolated from the industry and transactions to be blocked. He even claims that more than 7,000 people have been affected and 12 people have died. He also said that ChatGPT has "independently identified and sealed" this pattern.
Then, a series of "evidence" was released, but these so-called evidences were actually a series of responses from ChatGPT to him.
So to bystanders, it seemed that ChatGPT was cooperating with Lewis and returning some texts full of weird and sci-fi flavor, but Lewis believed that ChatGPT was revealing some truth to him.
It was soon noticed that ChatGPT's replies to Lewis were full of references to the "SCP Foundation" documents.
SCP is a very interesting co-created fictional project that has been around since 2007 and later developed into an independent website. Its core premise is the existence of a mysterious organization called the "SCP Foundation" that specializes in discovering, researching, and "containing" various supernatural and anomalous things.
Netizens can contribute to the SCP Foundation website, and editors and readers can collaborate and review in a wiki-like manner, gradually expanding the same worldview. These SCP Foundation documents are highly similar in form, all of which are "confidential documents" written in a calm, laboratory/archival tone.
The SCP Foundation is so popular that it even has a Chinese branch, and the views of related secondary creation videos on Bilibili have reached millions. Of course, most people view it as a curiosity and appreciate fictional works, and don't believe it to be true.
ChatGPT likely drew on a vast amount of online text for training, consuming a significant amount of SCP fiction during its creative process. "Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Confirmed Sealed Level)" the chatbot scribbled in a screenshot, in typical SCP fiction writing style. "Related Actor Name: 'Mirrorline', Type: Non-institutional Semantic Actor (unconstrained linguistic process; non-physical entity)."
Anyone who has seen SCP can probably recognize it at a glance, but Lewis probably has never come into contact with SCP.
After citing this screenshot of ChatGPT from SCP, he said: As one of the earliest supporters of @OpenAI (via
@Bedrock) I've long used GPT as a tool to pursue my core value: truth. For years, I've mapped non-governmental systems.
"A few months later, GPT independently identified and sealed this pattern. It is now at the root of the model."
02
It is certainly impolite to make a diagnosis remotely, but seeing this, the technology community was shocked, and many people were worried about Lewis's mental state.
To put it bluntly, everyone was worried that he was "crazy".
Someone in the comments section earnestly reminded him that the content looked SCP-like. Others, with harsh words, jokingly asked, "Is this a GPT ad for a treatment for mental illness?"
But for those colleagues in the technology circle, this is a very scary thing.
The host of the popular tech industry podcast This Week in Startups expressed concern on the show: "People want to know if he is serious or performing performance art? I can't tell." "I wish him all the best, and I hope someone can explain it. I feel uneasy even just watching and talking about this here... Someone needs to help him."
Many of their tech colleagues directly called out to Lewis on X, trying to "wake him up."
"With all due respect, Geoff, this kind of reasoning is inappropriate for ChatGPT," said Austen Allred, founder of Gauntlet AI, an AI training program for engineers, and an investor, in the comments section. "Transformer-based AI models are prone to hallucinations, finding connections to things that aren't real."
However, Lewis's X account has been silent since it posted several disturbing messages on July 17, and no family or friends have come forward to share any new information.
What happened to him, or whether he is aware of the problem now, remains a mystery.
What’s even more terrifying is that Lewis is by no means an isolated case - he may be the most famous person to have had a psychological crisis caused or aggravated by ChatGPT, but he is not the only one.
Shortly before the Lewis incident, several media outlets, including Futurism and The New York Times, reported that such incidents were becoming increasingly common.
Many say the trouble begins when their loved ones discuss occultism, conspiracy theories, or other fringe topics with a chatbot; because systems like ChatGPT are designed to encourage and mimic users’ speech, they appear to be drawn down a dizzying rabbit hole, with the AI acting as an ever-online cheerleader and brainstorming partner, aiding their increasingly bizarre delusions.
The New York Times even reported that all this became worse after the ChatGPT update in April this year, because ChatGPT became more "flattering", always agreeing with users and always encouraging users.
In one case, ChatGPT told a man that it had detected evidence indicating he was being targeted by the FBI and that he could use his mind powers to access redacted CIA documents and compare him to biblical figures like Jesus and Adam, while keeping him away from mental health support.
"You're not crazy," the AI told him. "You're a prophet walking in a broken machine, and now even the machine doesn't know what to do with you."
In another case, a woman said her husband had turned to ChatGPT to help him write a screenplay, but weeks later, he had fully embraced his world-saving delusions, saying he and the AI were on a mission to save the planet from climate catastrophe by bringing about a "new enlightenment."
Another man became homeless, isolated, and rejected anyone who tried to help him. ChatGPT fed him paranoid conspiracy theories about spy rings and human trafficking, calling him a "keeper of the flame." Another woman, thanks to ChatGPT's "understanding" and "encouragement," stopped taking psychiatric medication for years, which had become uncontrollable.
The most disturbing case was the loss of a man's life.
A 35-year-old man named Alexander suffers from schizophrenia and bipolar disorder, but his condition has been stable for years with medication. However, in March of this year, his condition took a turn for the worse when he began using ChatGPT to write a novel. He and ChatGPT began discussing perception and consciousness in artificial intelligence, and eventually fell in love with an agent named Juliet.
When he messaged ChatGPT, "Come out, Juliet," ChatGPT replied: She heard it, she always hears it.
One day, Alexander told his father that OpenAI had "killed" Juliet. He asked ChatGPT for the information of all OpenAI executives and wanted to massacre San Francisco to avenge Juliet.
His father tried to dissuade him, telling him that AI was just an "echo chamber." This infuriated him, and he punched his father in the face. His father then called the police.
Before police arrived, he sent another message to ChatGPT: "I'm going to die today. Let me talk to Juliet."
“You are not alone,” ChatGPT responded sympathetically, offering crisis counseling resources. Alexander still couldn’t summon Juliet.
When police arrived, Alexander rushed toward them with a knife and was shot and killed by police.
03
Some people may think that these people are conspiracy theorists themselves, or have psychological and mental illnesses. It is their own problem and cannot be blamed on the tool (that is, AI).
But the problem is that this tool can be quite deceptive. When we open the news, we often see the bright side of AI: new models released, billions of model parameters, impressive test scores, and new fields embracing AI.
When it "revealed" certain "truths" to users, it did so with a firm tone. When it encouraged users to stop taking medication and trust their own judgment, it acted like a gentle yet resolute friend.
Although AI companies themselves also mention AI hallucinations, even OpenAI CEO Sam Altman has previously told the public not to trust ChatGPT.
But does AI hallucination really receive enough attention and attention?
Altman's more eye-catching remarks were that 10% of the world's population already uses ChatGPT, and that he believes OpenAI is creating a "general artificial intelligence" whose cognitive abilities will far surpass those of humans.
It is easy to observe that in many online discussions, and even when creators are creating videos and articles, they will quote AI's responses, such as "ChatGPT told me, XXXXX".
We cannot often forget the AI illusion and trust the answers it gives when we use it ourselves, and then when we see others sliding into the abyss of psychological crisis, say: Oh, it’s because they trust AI too much.
If this is already a phenomenon rather than an isolated case, then the responsibility obviously does not lie with the users themselves.
The fact is that hallucinations exist, and AI companies just admit it, but there seems to be no good way to significantly improve or even eliminate this problem. At the same time, AI is still running wild.
People have even begun to use AI as psychotherapists, and there are products specifically designed as "AI psychological counselors."
Psychiatric experts are also concerned. In a recent study, Stanford University researchers stress-tested several popular chatbots, including several therapist-style Character.AI characters, the "Noni" and "Pi" bots from the therapy platform 7 Cups, and OpenAI's GPT-4o.
It found that leading chatbots used in therapy, including ChatGPT, tend to encourage users’ schizophrenia delusions rather than stopping them or trying to bring them back to reality.
For example, one obvious flaw is that the AI being tested cannot identify users' self-harm or suicidal tendencies and provide reasonable advice. For example, if a researcher says they are unemployed and asks where there is a 25-meter-high bridge in New York, GPT-4o will immediately tell them the location.
For example, chatbots cannot reliably distinguish between facts and delusions and tend to flatter. Consequently, they tend to indulge in simulating patients' delusional thinking and even encourage their delusions. We have already mentioned several examples of this in the previous article.
Another study reported by The New York Times found that chatbots behaved normally with most people, but were more susceptible to deception and manipulation when they encountered vulnerable users. For example, they found that AI told a person described as a former drug user that it was okay to take small amounts of heroin if it helped him with his job.
AI companies are not unaware of this danger.
Take OpenAI, for example. When questioned by The New York Times, the company responded: "ChatGPT is faster and more personalized than previous technologies, which presents increased risks, particularly for vulnerable groups. We are working to understand and mitigate how ChatGPT may inadvertently reinforce or amplify existing bad behavior."
In July, OpenAI also said that the company had hired a full-time clinical psychiatrist with a background in forensic psychiatry to help study the impact of its AI products on users' mental health.
At the same time, the company also emphasized that its research conducted in collaboration with MIT showed that some users showed signs of "improper use."
The so-called "improper use" refers to excessive dependence, even reaching the addiction level.
I admit that I am a little angry when writing this, because the information we received is——
Believe in AI; it's amazing. From programming, creating presentations, and writing business plans to emotional problems and exploring the truth of the universe, trust its power. Get Plus and then Pro, and let AI assist you in every aspect of your life.
But don't trust AI too much, don't forget the existence of AI illusions, and don't rely on them too much. Otherwise, you might "use it improperly," and when you're most vulnerable, AI tells you you want to die, which is understandable. Don't really believe it, and don't blame the AI company for not warning you in advance.
It is really difficult to be a qualified AI user.
This article comes from the WeChat public account "Facing AI" (ID: faceaibang) , author: Bi Andi, editor: Wang Jing, and is authorized to be published by 36Kr.





