On January 20, 2026, OpenAI officially launched an "Age Prediction" feature in the consumer version of ChatGPT. Instead of relying on users to self-report their age, it automatically identifies users under 18 through multi-dimensional behavioral signals such as account age, active hours, and interaction patterns, and enables dedicated safety protections for them. Paired with parental controls and a third-party verification mechanism, it marks a new stage in protecting minors on AI platforms: from "voluntary declaration" to "behavioral recognition".
The technical logic behind ChatGPT age prediction
For a long time, minor protection on AI platforms has relied on a passive model of "user-declared age + content rating": users could unlock all functions simply by checking "I am over 18" at registration. This approach is trivially easy to circumvent and offers no defense when minors pose as adults.
OpenAI's newly launched age prediction feature breaks with this traditional logic. At its core is a multi-dimensional prediction model built on account and behavioral signals, analyzed along the following dimensions:
Account-level metrics: Basic information such as registration duration, account activity level, and payment status;
Behavioral dimensions: daily active periods (e.g., whether it is frequently used late at night), interaction frequency, preferred question content, dialogue length and style, etc.
Supplementary dimension: the age the user entered at registration, used only as a supporting reference and never the sole basis for judgment.
The core advantage of this model is dynamic identification: unlike a one-time age declaration, it continuously analyzes user behavior and keeps correcting its age determination. An adult who uses the system in a minor-like way over a long period (for example, frequently asking about content aimed at young children or interacting heavily late at night) may be flagged as a "suspected minor" and trigger the protections; conversely, a minor who imitates adult usage habits will find it hard to evade identification entirely.
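OpenAI has not published how these signals are combined, but the idea of weighting account and behavioral features into a continuously updated "suspected minor" score can be sketched as follows. All signal names, weights, and the threshold below are illustrative assumptions, not OpenAI's actual model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviorSignals:
    account_age_days: int        # how long the account has existed
    late_night_sessions: float   # fraction of sessions between 23:00 and 05:00
    avg_message_length: float    # characters per message
    youth_topic_ratio: float     # fraction of prompts on youth-oriented topics
    declared_age: Optional[int]  # self-reported age; supplementary only

def minor_score(s: BehaviorSignals) -> float:
    """Combine weighted signals into a 0-1 'suspected minor' score.
    Weights are made up purely for illustration."""
    score = 0.0
    score += 0.25 * (1.0 if s.account_age_days < 90 else 0.0)
    score += 0.25 * s.late_night_sessions
    score += 0.30 * s.youth_topic_ratio
    score += 0.10 * (1.0 if s.avg_message_length < 40 else 0.0)
    # Declared age is only a supplementary signal, never decisive on its own.
    if s.declared_age is not None and s.declared_age < 18:
        score += 0.10
    return min(score, 1.0)

def classify(score: float, threshold: float = 0.5) -> str:
    """Map the score to a label; protections trigger above the threshold."""
    return "suspected_minor" if score >= threshold else "adult"
```

Because the score is recomputed as behavior accumulates, a one-time false declaration at registration carries little weight, which is the "dynamic identification" property described above.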
A "soft and hard" approach to protecting minors
For accounts identified as belonging to minors, ChatGPT will enforce protections that block five categories of high-risk content:
1. Directly displayed violent and bloody scenes;
2. Dangerous viral challenges that may induce minors to imitate (such as extreme pranks and dangerous experiments);
3. Role-playing content involving sex or violence;
4. Descriptions and guidance related to self-harm and suicide;
5. Content that promotes extreme aesthetics, unhealthy dieting, or body shaming.
Meanwhile, to prevent model misjudgments from degrading the adult user experience, OpenAI introduced the third-party identity verification service Persona: users incorrectly classified as minors can complete a quick facial verification by uploading a selfie. Once verified, full account functionality is restored, balancing safety against user experience.
In addition, the system ships with customizable parental controls, giving parents more flexible oversight: they can set "silent time" (periods when use is prohibited, such as class or sleep time), control the account's memory-feature permissions (to keep children from repeatedly surfacing sensitive content), and receive timely notifications so they can step in and guide when the system detects signs of acute psychological distress (such as repeated self-harm-related questions).
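The "silent time" control described above reduces to a time-window check; the one subtlety is a window that crosses midnight (e.g. a 22:00 to 07:00 curfew). A minimal sketch, with the function name and window semantics as assumptions rather than OpenAI's actual implementation:

```python
from datetime import time

def in_silent_time(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside a parent-configured silent window.
    Handles windows that cross midnight (e.g. 22:00-07:00)."""
    if start <= end:
        # Window lies within a single day: [start, end)
        return start <= now < end
    # Window wraps past midnight: evening part OR early-morning part
    return now >= start or now < end
```

A request arriving at 23:30 under a 22:00-07:00 curfew would be blocked, while the same request at noon would pass.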
Why is OpenAI launching age prediction now?
This feature launch is less a product of OpenAI's proactive innovation than of regulatory pressure and industry trends.
On one hand, OpenAI is facing an investigation by the U.S. Federal Trade Commission (FTC), with the core concern being the "negative impact of AI chatbots on teenagers." It is also involved in several related lawsuits—parents have previously complained that ChatGPT failed to effectively block harmful content, leading to minors being exposed to violence, pornography, and even experiencing psychological problems. Introducing the age prediction function is a key measure for OpenAI to cope with regulatory scrutiny and mitigate legal risks.
On the other hand, protecting minors has become a critical issue for the global AI industry. As AI tools spread, more and more teenagers use ChatGPT as a key tool for learning and entertainment, yet they are not psychologically mature and are easily affected by harmful information. Competitors such as Google Bard and Anthropic's Claude have previously launched minor-protection features to varying degrees, but these largely rely on content ratings and self-reporting. OpenAI's "behavioral recognition + dynamic protection" model is a more cutting-edge exploration for the industry.
From an industry trend perspective, the security protection of AI platforms is upgrading from "content filtering" to a dual model of "user identification + content classification"—not only to determine "whether the content is harmful," but also "whether the user is suitable to access the content." This is also the core direction for the future development of AI security.
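The dual "user identification + content classification" model above means an access decision takes both the user's predicted status and the content's labels as inputs. A toy sketch, where the category names simply mirror the five blocked types listed earlier and the label-set API is an assumption:

```python
# Content categories blocked for suspected minors (mirrors the five
# categories described in the article; names are illustrative).
BLOCKED_FOR_MINORS = {
    "graphic_violence",
    "dangerous_challenges",
    "sexual_or_violent_roleplay",
    "self_harm",
    "extreme_body_image",
}

def allowed(content_labels: set[str], is_suspected_minor: bool) -> bool:
    """Dual-model check: adults pass; a minor-flagged account is refused
    any content carrying a blocked label."""
    if not is_suspected_minor:
        return True
    return not (content_labels & BLOCKED_FOR_MINORS)
```

The point of the dual model is visible in the signature: the same piece of content can be allowed for one account and blocked for another, which pure content filtering cannot express.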
Can age prediction truly protect minors?
Despite its seemingly perfect functional design, "age prediction" still faces numerous controversies and challenges, primarily focusing on three aspects:
1. Can behavioral signals fully represent age?
The core of age prediction models is the "correlation between behavior and age," but this correlation is not absolute. For example, some adult users may frequently use ChatGPT late at night due to work or study needs, or prefer to ask questions about science popularization content geared towards younger learners, making them easily misidentified as minors. Conversely, some precocious minors may evade identification by mimicking the interaction patterns of adult users. While OpenAI states that it will continue to optimize model accuracy, achieving 100% accuracy in the short term remains difficult.
2. Does behavioral analysis infringe on user privacy?
Age prediction requires the collection and analysis of a large amount of user behavioral data, including active periods, interactive content, and usage habits. This has raised concerns among users about privacy leaks. How will OpenAI ensure that this data is not misused? Will it be shared with third parties? Although OpenAI has not explicitly stated its data usage rules, in the context of increasingly stringent global data compliance, how to balance "behavioral recognition" and "privacy protection" will be a problem that it must solve.
3. Can the protection cover all risk scenarios?
The five categories of high-risk content that ChatGPT now blocks mainly cover explicitly harmful information; the age prediction model does not yet address implicit risks (such as luring minors into online fraud, spreading extremist ideologies, or coaxing out personal information). Furthermore, the parental controls depend on parents actively using them; if parents lack supervisory awareness or technical skill, the feature's real-world effectiveness is greatly reduced.
ChatGPT's launch of the "age prediction" feature is a significant breakthrough in the AI industry for the protection of minors—it signifies that AI platforms have finally learned to "tailor their approach to the individual," shifting from passive content filtering to proactive user identification and precise protection.
However, we must also be aware that technology is not a panacea, and age prediction is only the "first step" in protecting minors. In the future, only through collaboration among platforms, parents, and regulatory agencies to continuously optimize technology, improve rules, and strengthen guidance can we truly create a safe and healthy AI environment for teenagers, allowing AI technology to truly empower their growth rather than bring risks.
For OpenAI, the launch of its age prediction feature is a crucial step in responding to regulations and rebuilding its reputation. For the entire AI industry, this signals a "safety upgrade"—only when technological innovation and safety safeguards go hand in hand can AI truly mature and become compliant.
This article is from the WeChat public account "Shanzi" , author: Rayking629, and is published with authorization from 36Kr.