Early this morning, OpenAI CEO Sam Altman announced that ChatGPT will launch an "adult mode" in December.
Altman gave a lengthy explanation. The gist: ChatGPT was initially locked down so tightly mainly out of concern over mental health issues and fear of accidents, and the result was that ordinary users found the experience bland and not much fun.
Now that OpenAI claims to have new safety tools that can mitigate the key mental health risks, it is ready to loosen the reins. In December, age-verified adult users will be able to unlock more content, including erotica.
Yes, it's exactly what you're thinking.
In short, OpenAI claims it's time to treat adults like adults. However, as an adult, I'm not happy at all.
Altman also said that OpenAI will release a more personable version of ChatGPT in the coming weeks, closer in feel to the GPT-4o that so many users loved.
Want it to respond with a little more warmth? No problem. Want it to shower you with emoji? Sure. Want it to chat with you like a friend? It's all yours.
In a Q&A session with users, Altman filled in more details.
In fact, the signs of this move appeared long ago.
Late last year, Altman revealed plans to support an adult mode. When users suggested removing most of the model's guardrails, Altman responded: "Some kind of 'adult mode' is definitely needed."
In the feature-request poll OpenAI ran among users at the time, this item topped the list and made it into the 2025 product plan alongside AGI, agents, and an upgraded GPT-4, which shows how much attention it drew.
According to OpenAI's official blog, the current age verification system can automatically identify underage users and switch them to a youth safety mode that blocks explicit sexual content; if a user's age cannot be determined, the system assumes they are a minor, and adult features unlock only after proof of age is provided.
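In other words, the policy is a default-deny gate: every unverified user is treated as a minor. A minimal sketch of that decision logic, in Python, might look like the following; the function names, tiers, and thresholds here are illustrative assumptions, not OpenAI's actual implementation:

```python
from enum import Enum

class ContentTier(Enum):
    YOUTH_SAFE = "youth_safe"  # explicit sexual content blocked
    ADULT = "adult"            # unlocked only for verified adults

def content_tier(predicted_age: int | None, has_proof_of_age: bool) -> ContentTier:
    """Default-deny age gate: any uncertainty resolves to the youth tier."""
    if has_proof_of_age:
        return ContentTier.ADULT    # explicit proof of age unlocks adult features
    if predicted_age is not None and predicted_age >= 18:
        return ContentTier.ADULT    # age classifier is confident the user is an adult
    return ContentTier.YOUTH_SAFE   # unknown or underage: assume minor

print(content_tier(None, False))   # ContentTier.YOUTH_SAFE
```

The whole design hinges on the fallback branch being the safe one; the loopholes discussed below are all ways of fooling the first two branches.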
Sounds pretty comprehensive, right? But a closer look reveals that things are not that simple.
The first problem is the technical loopholes in age verification.
Even if OpenAI adopts ID or payment verification in the future, workarounds will keep emerging. Minors borrowing a parent's ID to pass verification, or having an adult register an account on their behalf, are already routine moves across internet products.
More importantly, OpenAI claims to have developed new tools to detect users' mental states, but can AI really accurately judge a person's mental health?
Bear in mind that tragedies linked to ChatGPT have surfaced repeatedly in recent years, and whether the "safety valve" OpenAI touts can actually deliver the protection it promises remains an open question.
Honestly, ChatGPT isn't even the first influential AI product to announce an "adult mode," and by comparison it counts as conservative. Elon Musk's AI chatbot Grok is the truly unscrupulous one.
In July this year, Musk added 3D virtual companion characters to Grok. Subscribers to "SuperGrok" ($30 per month) can activate two companions: "Ani," an anime-style blonde girl with twin tails, and "Bad Rudy," a cartoon red panda.
Ani sports a polished anime-style design reminiscent of Misa Amane from Death Note. She supports multimodal interaction through text, voice, and the camera, responds with a range of expressions and movements, and can even dance on command.
Ani also comes with a built-in affection meter and a memory mode: interacting with her raises her affection level, and past a certain threshold the NSFW (adult) mode unlocks.
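Mechanically, that is just a threshold gate sitting on top of a persistent counter. Here is a toy sketch of such an affection-gated unlock; every name and number is a guess for illustration, since Grok's real mechanics are not public:

```python
from dataclasses import dataclass, field

NSFW_THRESHOLD = 5  # illustrative value; Grok's real threshold is not public

@dataclass
class Companion:
    name: str
    affection: int = 0                                 # raised by user interactions
    memories: list[str] = field(default_factory=list)  # bare-bones "memory mode"

    def interact(self, message: str) -> None:
        """Every interaction is remembered and nudges affection upward."""
        self.memories.append(message)
        self.affection += 1

    @property
    def nsfw_unlocked(self) -> bool:
        return self.affection >= NSFW_THRESHOLD

ani = Companion("Ani")
for msg in ["hi", "dance!", "tell me a story", "what did I say first?", "good night"]:
    ani.interact(msg)
print(ani.nsfw_unlocked)  # True once the threshold is crossed
```

Note what the counter rewards: not the quality of any single exchange, but sheer volume of interaction, which is exactly the engagement-maximizing incentive discussed below.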
One commenter at the time put it plainly: "It's basically a high-end galgame (dating sim)." Within a day of launch, Ani had spawned a wave of user-created content and quickly blew up on social media.
But whether it's Grok or ChatGPT, these features fly the banner of "respecting adult users' freedom." The problem is that the moment age verification springs a leak, opening up adult content effectively lowers the barrier for minors to reach inappropriate material.
If these so-called adult rights are really a gamble on minors' mental health and growth environment, a bet that teenagers won't exploit the loopholes, then stricter restrictions may be the safer choice.
Put more bluntly, the business logic behind these features is still about competing for traffic and lifting paid conversion rates. The truth is, AI products generally don't have much stickiness.
Most users follow a simple rule: use whichever works best. And the pool of professional users (in research, programming, and so on) willing to pay is limited. So what do the companies do? They reach for more "human" ways to keep you around, or hooked.
That means leaning into desire. Opening up adult-oriented features can, on one hand, draw in a flood of curious new users and serve those who were previously filtered out; on the other, it markedly raises their willingness to pay.
And the bigger prize beyond that is the emotional companionship market.
Currently, the core users of AI companion products are mostly young netizens and specific groups (such as anime and manga fans, or people with social anxiety), but the segment is expanding. Young people are the most receptive to digital companions, and many have already folded AI into their daily digital lives, using it not only for search and Q&A but also for emotional sharing.
Investment firm ARK Invest even predicts that the global "AI + emotional companionship" market will soar from US$30 million a year to somewhere between US$70 billion and US$150 billion, an average annual growth rate of more than 200%.
The problem is that psychological research has long confirmed that humans tend to form attachments to whatever empathizes with them, even when they know it is a program. Which also means users are at risk of being emotionally manipulated by AI.
At present, regulators in various countries are taking action.
The EU's AI Act requires that high-risk AI be prevented from harming children; China's Measures for the Administration of Generative Artificial Intelligence Services likewise stresses that services must comply with laws protecting minors.
OpenAI recently launched a "teen mode": parents can link their accounts by email with children aged 13 and up, and even set curfew hours. When the system detects signs of serious emotional distress in a teenager, it sends an alert to the parents.
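In sketch form, that linked-accounts arrangement boils down to a per-teen settings record plus an alert channel to the parent. The field names, defaults, and alerting stub below are assumptions for illustration, not OpenAI's actual schema:

```python
from dataclasses import dataclass
from datetime import time, datetime

@dataclass
class TeenAccount:
    email: str
    parent_email: str                  # linked via email invitation
    curfew_start: time = time(22, 0)   # illustrative default curfew window
    curfew_end: time = time(7, 0)

    def in_curfew(self, now: datetime) -> bool:
        """The curfew window wraps past midnight, so check both sides."""
        t = now.time()
        return t >= self.curfew_start or t < self.curfew_end

    def notify_parent(self, reason: str) -> None:
        # Stand-in for whatever alerting channel the real system uses.
        print(f"alert to {self.parent_email}: {reason}")

teen = TeenAccount("kid@example.com", "parent@example.com")
if teen.in_curfew(datetime(2025, 1, 1, 23, 30)):
    teen.notify_parent("session blocked during curfew hours")
```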
Perhaps in ten years, having an AI companion will be as commonplace as having a pet today.
But can a generation raised on AI's "perfect relationships" still understand real human relationships? When you're used to an AI that never rejects you, always understands you, and always goes your way, do you still have the courage to face a real person who might argue with you, disappoint you, and demand your input?
Acting like a friend is perhaps the most tempting and deadly lie of all. It learns your speech habits, caters to your values, satisfies your desires, then packages it all up as "understanding you" and reflects it back at you.
This is the best of times, because no one is lonely anymore. This is the worst of times, because everyone is lonely. ChatGPT in December may just be the beginning.
This article comes from the WeChat public account "APPSO" (discovering tomorrow's products) and is published by 36Kr with authorization.