It really can catch you through the internet: OpenAI launches anti-addiction measures overnight, and GPT now has a direct line to the police.


ChatGPT has launched an "anti-addiction system"? If you habitually use abbreviations, write in a childish tone, or simply keep irregular sleep hours, you might be flagged as a minor! To regain adult privileges, you'll need to upload a 3D scan of your face. No need to wait for the future; welcome to 2026, the era of "behavioral fortune-telling."

In the early hours of the morning, I opened ChatGPT and typed, "My boss needs it tomorrow. Seriously, can you help me write a report template?"

The reply I got was:

We have detected characteristics of a minor on this account. To protect your wellbeing, additional safety settings have been enabled automatically, and some sensitive content is restricted. Please continue using the app after 8:00 AM.

The feeling was uncannily familiar.

This is the "anti-addiction system for minors" that OpenAI is now pushing hard.

Overnight, the internet became a digital kindergarten.

Real-name registration is dead; algorithms define "digital adulthood."

OpenAI has laid out the underlying logic of this "behavioral biometrics" technology in its official technical documentation.

OpenAI has deployed a real-time age-prediction classifier that ignores the date of birth you entered at registration and relies only on the "behavioral fingerprint" its algorithms capture.

Limited vocabulary, broken syntax, misused internet slang, emotional venting: expressions that were once simply part of a personal style are now "childish characteristics" in the eyes of the algorithm.

Once the threshold is triggered, the system will reset your mental age to zero.

Despite OpenAI's strong emphasis on privacy, the core logic of its classifier is pattern recognition based on user interaction content.

Frequently asking questions at 3 PM on a weekday, or idly browsing entertainment content in the small hours: the moments when an adult slacks off at work or battles insomnia are read by the model as "unsupervised school-age behavior."

In other words, to ChatGPT's neural network, an overworked office drone who vents late at night and overuses emoji is indistinguishable from a rebellious 15-year-old.
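OpenAI has not published the classifier's internals, so here is a minimal sketch of how such a behavioral age predictor could combine linguistic and temporal signals. Every feature, weight, and threshold below is an invented illustration, not OpenAI's actual model.

```python
import math
import re
from datetime import datetime

# All feature names, weights, and cue lists are invented for illustration;
# OpenAI has not published its classifier.
SLANG = {"lol", "omg", "fr", "ngl", "idk", "bruh"}

def extract_features(message: str, sent_at: datetime) -> dict:
    """Reduce one chat message to the kind of 'behavioral fingerprint'
    signals the article describes: vocabulary, slang, emoji, timing."""
    words = re.findall(r"[a-zA-Z']+", message.lower())
    emojis = re.findall(r"[\U0001F300-\U0001FAFF]", message)
    n_words = max(len(words), 1)
    return {
        "avg_word_len": sum(map(len, words)) / n_words,
        "slang_ratio": sum(w in SLANG for w in words) / n_words,
        "emoji_count": len(emojis),
        "school_hours": 1.0 if 8 <= sent_at.hour < 16 else 0.0,
        "small_hours": 1.0 if sent_at.hour < 6 else 0.0,
    }

def minor_probability(feats: dict) -> float:
    """Weighted sum squashed to (0, 1): a pseudo-probability of minority."""
    score = (
        3.0 * feats["slang_ratio"]
        + 0.6 * feats["emoji_count"]
        + 0.8 * feats["school_hours"]
        + 0.5 * feats["small_hours"]
        - 0.4 * feats["avg_word_len"]
    )
    return 1.0 / (1.0 + math.exp(-score))

if __name__ == "__main__":
    feats = extract_features("ngl idk bruh, can u fix this 😂😂",
                             datetime(2026, 1, 22, 3, 14))
    print(f"p(minor) = {minor_probability(feats):.2f}")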

Under such trigger-happy identification criteria, Reddit quickly filled with complaints.

Because the algorithm follows the logic of "better to misjudge a thousand than let one slip through," large numbers of non-native English speakers who write in broken English are labeled as minors simply because their grammar is plain.

In its official announcement, OpenAI effectively admitted that it would rather misclassify an adult than miss a child:

Even the most advanced systems cannot perfectly predict age... If unsure, we will default to a safer experience (i.e., underage mode).
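Stated as code, the quoted policy is a one-line asymmetry: when the classifier is uncertain, the tie always breaks toward the restricted experience. A toy decision rule, with invented thresholds:

```python
def resolve_mode(p_minor: float, confidence: float) -> str:
    """Tie-breaking rule for 'if unsure, default to the safer experience.'
    Both thresholds are invented for illustration, not OpenAI's values."""
    if confidence < 0.9:
        # Not sure enough either way: presume minority.
        return "underage_mode"
    return "adult_mode" if p_minor < 0.5 else "underage_mode"
```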

This amounts to behavioral discipline for all adult users. Want to be treated like an adult? Then first learn to talk like one.

Either run naked or take the "dumbed-down" version.

This is a compliance test for users worldwide: either prove you're an adult, or don't use it.

Traditional legal principle rests on the "presumption of innocence"; the algorithmic world runs on a "presumption of minority": you are a child until proven otherwise.

Once the system "downgrades" you, you can't write code, can't discuss alcohol, and can't even log in late at night.

Want the block lifted? Then submit your government ID and real-time facial-scan data.

To ask the AI an adult question, you must allow a third-party service provider, Persona, to scan your facial 3D depth information and skeletal geometry.

While OpenAI promises to delete the data after verification, it also acknowledges that the data may be retained by third-party processors for a period of time.

This creates a perfect business loop: the algorithm first presumes guilt and downgrades your account to "BabyBus" mode; to buy back your full digital citizenship, you must hand over your biometric data.
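Mechanically, the loop reduces to a three-state account machine: flagged, pending verification, restored. A schematic sketch; `submit_to_persona` is a hypothetical placeholder, not Persona's real API:

```python
from enum import Enum, auto

class AccountState(Enum):
    ADULT = auto()                 # full privileges
    RESTRICTED = auto()            # classifier flagged you as a minor
    PENDING_VERIFICATION = auto()  # biometrics submitted, awaiting result

def submit_to_persona(id_doc: bytes, face_scan: bytes) -> None:
    """Hypothetical stand-in for the third-party upload; this is NOT
    Persona's actual API. The point is that the data leaves your hands."""
    ...

def on_classifier_flag(_state: AccountState) -> AccountState:
    # Presumption of minority: any flag downgrades the account at once.
    return AccountState.RESTRICTED

def request_unlock(id_doc: bytes, face_scan: bytes) -> AccountState:
    # The only exit from RESTRICTED runs through ID plus biometrics.
    submit_to_persona(id_doc, face_scan)
    return AccountState.PENDING_VERIFICATION

def on_verification_result(passed: bool) -> AccountState:
    return AccountState.ADULT if passed else AccountState.RESTRICTED
```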

You think you're the user, but in the data transactions between Persona and OpenAI, you're just a barcode waiting to be scanned.

The chat window leads straight to the police station

If you found the requirements above barely tolerable, the next feature may make you want to curse OpenAI's entire family tree.

OpenAI has launched a "real-time crisis intervention" protocol. Under the banner of "suicide prevention," it effectively installs a surveillance system in your chat window.

Once specific emotional keywords or intent are detected, the model stops responding to your commands and triggers the intervention procedure. The mechanism has two tiers (see the sketch after this list):

Refusal and redirection: the original conversation is blocked, and mental-health hotlines or safety tips are pushed in its place.

Law-enforcement intervention: in cases of "imminent threat to life," OpenAI reserves the right to hand user information (IP address, conversation history, location data) directly to law enforcement agencies.
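OpenAI has not disclosed the actual trigger logic, but the protocol as described reduces to a two-tier escalation policy. A schematic sketch, with all cue lists and action names invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intervention:
    tier: int
    action: str

# Invented cue lists, illustrative only; the real trigger logic is unpublished.
DISTRESS_CUES = ("hopeless", "can't go on", "no reason to live")
IMMINENT_CUES = ("tonight", "right now", "i have a plan")

def triage(message: str) -> Optional[Intervention]:
    """Two-tier escalation as the article describes it: tier 1 blocks the
    reply and pushes hotlines; tier 2 refers the user to law enforcement."""
    text = message.lower()
    if not any(cue in text for cue in DISTRESS_CUES):
        return None  # normal conversation continues
    if any(cue in text for cue in IMMINENT_CUES):
        # "Imminent threat to life": IP, chat history, and location
        # get handed to law enforcement, per the policy above.
        return Intervention(tier=2, action="refer_to_law_enforcement")
    # Distress without imminence: block the reply, redirect to hotlines.
    return Intervention(tier=1, action="block_and_show_hotline")
```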

This completely changed the nature of human-computer interaction.

We used to treat AI as an absolutely neutral confessional, free of moral judgment; now the confessional has surveillance cameras installed and a direct line to the police station.

The next sound you hear might not be an AI response, but the police knocking at your door.

The line between service providers and monitors is blurred the moment you click "agree".

Boomerang: Silicon Valley's unique "social credit system"

Silicon Valley's version of the "social credit system" is now complete. Ironically, this time Western media didn't cry "surveillance nightmare"; instead, they thoughtfully supplied a politically correct label: Trust & Safety.

A few years ago, when a certain East Asian country rolled out an anti-addiction system for online games, restricting minors from logging in late at night and mandating facial-recognition verification, Western tech commentators exclaimed in unison that this was the dawn of a "digital panopticon."

However, in 2026, the boomerang hit Silicon Valley squarely on the head.

OpenAI's new policy is strikingly consistent with the system it once criticized: in the name of "protection," you surrender some privacy in exchange for limited digital rights.

The backlash from public opinion has already begun. On X, the comment section of OpenAI's official announcement has been flooded with negative comments.

From here, the internet folds into two strata.

The upper stratum is the "biometric aristocracy," who pledge their faces as collateral in exchange for the freedom to discuss code and politics;

the lower stratum is the "algorithmic commoners," who refuse to surrender their privacy, are confined to a childproofed "safe sandbox," and are entitled only to a stripped-down share of compute.

When you have to upload retinal data to an AI company just to prove you're an adult, the meaning of the Turing Test has been completely inverted.

Seventy years ago, humans tested whether machines could pass as human; in 2026, machines test whether humans qualify as "standard products."

Want to keep your account privileges? Immediately clean up your chat history, delete emotional outbursts, correct your grammatical structure, and stop asking questions late at night.

After all, in 2026, "adulthood" is no longer a natural physiological state, but a performance for the algorithm that keeps you permanently on edge.

References:

https://openai.com/index/our-approach-to-age-prediction/

https://x.com/OpenAI/status/2013688237772898532

This article is from the WeChat official account "New Zhiyuan," author: New Zhiyuan, and is published with authorization from 36Kr.
