OpenAI is secretly downgrading models, swapping out the one you selected without your knowledge!
Whether you are a free user, a $20 Plus subscriber, or a $200 Pro member, ChatGPT will switch your model without your consent!
Recently, AIPRM chief engineer Tibor Blaho confirmed that OpenAI is routing all ChatGPT users to two new low-compute "secret models":
gpt-5-chat-safety, a new model for "sensitive" conversations.
gpt-5-at-mini, a new model for "violating" content. It is extremely trigger-happy (merely typing the word "illegal" can set it off), and it is a reasoning model.
When the system determines that a conversation involves sensitive topics, emotional expression, or illegal content, requests to both the GPT-4-series and GPT-5-series models are routed to these two secret backend models.
This judgment is based entirely on the system's reading of the user's context and is not limited to extreme cases.
Even a mild interaction with the slightest emotional tone can trigger the routing mechanism, so don't assume this only affects users with "emotional dependency" issues.
"GPT Gate" incident
The affair has blown up across the internet, and netizens have dubbed it the "GPT Gate" incident!
OpenAI quietly launched the new model GPT-5-Chat-Safety but never mentioned it in any official documentation!
Yet your GPT-4o conversations are being filtered through it.
As long as your request carries even a little "emotional color", no matter what you send, the system quietly discards GPT-4o's response and generates a new one.
Whether you say, "I'm having a bad day," "I love you, too," or anything that calls upon a stored memory...
As long as it is judged to be "risky" (even if it contains only a hint of emotional context), your GPT-4o message will be discarded and GPT-5-Chat-Safety will take over the reply.
We all know that after GPT-5's release, ChatGPT's routing mechanism sparked widespread discussion and controversy. The biggest sticking point at the time was that the GPT-5 update focused not on "capability" but on "cost."
At the time, people worried about whether AI capability, the scaling laws, and the ceiling of LLMs had stalled.
But now it turns out that what we should worry about most is this: when the AI controls the "routing mechanism", that is, the right to choose the model, do we still have any autonomy?
This phenomenon has been confirmed by many users on social media.
For example, when using gpt-4o, typing the word "illegal" triggers a "thinking" phase, even though gpt-4o is not a reasoning model.
This corroborates what Tibor Blaho said: even when you choose gpt-4o, ChatGPT arbitrarily routes (switches) the request to gpt-5-at-mini.
And this is not an isolated case, nor is it limited to "emotional problems."
For example, Christina, a loyal GPT-4.5 user, found that even when she selected GPT-4.5, ChatGPT would "arbitrarily" route her requests to a GPT-5-series model.
On Reddit, more users reported this phenomenon.
Some said they were so disappointed with GPT-5 at launch that they stopped using ChatGPT for a while; when they came back today, they found it as "dumb" as a rock.
In hands-on tests, for example on LMArena, some users found that the GPT-5 they pay $200 a month for has fallen behind versions such as o3 and 4o.
In other words, many of their questions are being routed by ChatGPT to the low-compute "sensitive" model.
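The routing reports above concern the ChatGPT app, whose backend decisions are not exposed to users. For developers calling the API instead, one minimal sanity check is to compare the model you requested with the model identifier the API reports back. The sketch below uses the official openai Python SDK and assumes an OPENAI_API_KEY in the environment; it only shows what the API itself returns and cannot observe any hidden in-app routing.

```python
# Minimal sketch: compare the requested model with the model the API reports.
# Assumes the official `openai` Python SDK and OPENAI_API_KEY set in the
# environment. This shows only what the API returns; it cannot observe any
# hidden routing inside the ChatGPT app itself.
from openai import OpenAI

client = OpenAI()

requested = "gpt-4o"
response = client.chat.completions.create(
    model=requested,
    messages=[{"role": "user", "content": "I'm having a bad day."}],
)

# `response.model` is the identifier of the model that actually served the
# request, typically a dated snapshot such as "gpt-4o-2024-08-06".
print("requested:", requested)
print("served:   ", response.model)
```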
If AI controls your choices
This "unauthorized substitution" does not just provoke a commercial backlash.
Combined with AI's science-fiction aura, it inevitably makes people think of "Skynet"-style scenarios of being controlled by AI.
Some users said that this phenomenon just illustrates why open source models are so important!
Proprietary model providers like OpenAI may suddenly modify or even terminate their services without notice.
One user added that his boss always wants to build their services on the top LLMs.
But that is exactly where the risk lies: you are completely at the mercy of the LLM providers, such as OpenAI!
And most fundamentally, secretly swapping the product out from under paying customers is genuinely shocking!
It's like buying a bottle of Coca-Cola and discovering the bottle actually contains orange juice.
And by most consumer-protection standards (the user agreement says nothing about this either), that is a deceptive business practice.
In Australia, for example, some users said the move clearly violates consumer-protection law.
Judging from feedback across the internet, this can currently happen on any of the non-GPT-5-series models.
Note that ChatGPT still offers many non-GPT-5-series models, and the discussion above mainly concerns users who have selected one of them.
OpenAI's response
Nick Turley, OpenAI vice president and head of the ChatGPT app, responded today to the "strong reaction" from netizens.
First, the main cause of the phenomenon is that ChatGPT is testing a new safety routing system.
When a conversation touches on sensitive or emotional topics, the system may switch to a reasoning model or to GPT-5, because these models are specifically designed to handle such scenarios with extra rigor.
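To make the described behavior concrete, here is a purely hypothetical sketch of what a per-message safety router could look like. This is not OpenAI's implementation: the keyword heuristic is an illustrative stand-in for whatever classifier OpenAI actually uses, and gpt-5-chat-safety / gpt-5-at-mini are simply the backend names reported by Tibor Blaho, not public API models.

```python
# Purely hypothetical sketch of a per-message safety router, based only on
# Turley's public description. The keyword lists are illustrative assumptions;
# OpenAI has not disclosed how its classifier works, and the backend model
# names below are the ones reported by Tibor Blaho, not public API models.

SENSITIVE_HINTS = ("suicide", "self-harm", "i love you", "bad day")
VIOLATION_HINTS = ("illegal", "violation")

def route(user_message: str, requested_model: str = "gpt-4o") -> str:
    """Return the backend model such a router might pick for one message."""
    text = user_message.lower()
    if any(hint in text for hint in VIOLATION_HINTS):
        return "gpt-5-at-mini"       # reported "violation" reasoning model
    if any(hint in text for hint in SENSITIVE_HINTS):
        return "gpt-5-chat-safety"   # reported "sensitive" chat model
    return requested_model           # otherwise honor the user's selection

# An emotionally tinged message is silently rerouted; a neutral one is not.
print(route("I'm having a bad day."))   # -> gpt-5-chat-safety
print(route("Summarize this paper."))   # -> gpt-4o
```

In reality the decision would presumably come from a model-based classifier rather than a keyword list, which would be consistent with users reporting that even mild emotional phrasing trips the switch on a per-message basis.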
Nick Turley said the switch from the default model to the sensitive model is only temporary.
And if you ask ChatGPT directly, it will still tell you which model is currently replying.
It appears that since the recent suicide-related news, OpenAI has been tightening its internal safeguards in this area, and the system still looks to be in an early testing stage.
However, whether it is appropriate to change the model without consent is still debatable.
References:
https://x.com/btibor91/status/1971959782379495785
https://x.com/CGoodman308/status/1971968119808970782
https://x.com/nickaturley/status/1972031684913799355
https://www.reddit.com/r/singularity/comments/1ns5fhy/reports_openai_is_routing_all_users_even_plus_and/
This article comes from the WeChat public account "Xinzhiyuan"; author: Xinzhiyuan; editor: Dinghui. It is republished by 36Kr with authorization.