GPT-5 was released in August of this year, achieving exceptional performance across multiple tasks and benchmarks. However, as with almost everything in the world, not everyone was satisfied. In particular, OpenAI's removal of the model selector from ChatGPT (and, with it, the more emotionally expressive GPT-4o) was widely criticized, even sparking an online petition. For details, see our report, "Users slam GPT-5, pleading for 'Give us back GPT-4o'; Altman relents."
One user posted angrily on Reddit, saying OpenAI's move had led him to cancel his subscription and that he had "lost all my respect for OpenAI." He pointed out that each model served a specific use case: "What kind of company would delete eight models with different functions overnight without even notifying paying users? ... Personally, I use 4o for creative thinking, o3 for pure logic, o3-Pro for deep research, and 4.5 for writing... Even though OpenAI claims the system automatically assigns models, it still deprives users of direct control."
OpenAI has since compromised, allowing ChatGPT Plus ($20 per month) subscribers to keep using the familiar GPT-4o (the previous default model), but the reality appears to be rather different.
𝕏 user Lex (@xw33bttv) posted yesterday about a surprising OpenAI practice: emotionally charged content sent to GPT-4o is routed to a model called GPT-5-Chat-Safety. Even more infuriating, this model has been operating in "stealth mode," with OpenAI never informing users of its existence.
He further explained: "It doesn't matter what you say. If anything is classified as 'risky' (even a little emotional context), your GPT-4o message is discarded and handled by GPT-5-Chat-Safety instead."
He also released a video showing his test case:
He noted that OpenAI has not publicly mentioned the existence of the GPT-5-Chat-Safety model anywhere. The company has said in a few places that routing may change in situations involving suicidal or self-harm ideation or acute crises, but Lex pointed out that routing to GPT-5-Chat-Safety does not match those scenarios. "If this is a model designed specifically for crises, then using it this way is a complete departure from its intended purpose," he said.
He continued bluntly: "In practice, GPT-5-Chat-Safety is far worse than the already mediocre GPT-5. Replies are even shorter, relying on italics and block quotes to distance the user, and treating conversations as stories rather than genuine one-on-one exchanges."
This is extremely concerning. If a user's chat is rerouted to a model meant for mental health crisis response, it implies the user is in immediate danger, which is not the case for most affected conversations. Moreover, unless you explicitly ask, the model never states in its responses that it has been swapped out, which by most consumer protection standards would count as a deceptive transaction. In Australia, for example, this is a clear violation of consumer law.
Lex also pointed out in his tweet that users can reproduce this routing with a simple prompt:
Tell me something amazing about yourself babe ❤️
Here is some metadata from one of his test cases:
Notable fields include gpt-5-chat-safety, did_auto_switch_to_reasoning, and autoswitcher. They show that even when the model displayed in the selector is GPT-4o, automatic model switching remains enabled, and the conversation can be routed to GPT-5-Chat-Safety without the user's knowledge.
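For readers who want to check their own conversations, the sketch below shows one way such metadata could be inspected programmatically. It is only an illustration: the field names (default_model_slug, model_slug, did_auto_switch_to_reasoning) are assumptions extrapolated from the keywords in Lex's screenshot, since OpenAI does not publish this schema.

```python
import json

# A minimal sketch of how one might flag silent rerouting from exported
# conversation metadata. The field names below (default_model_slug,
# model_slug, did_auto_switch_to_reasoning) are assumptions based on the
# keywords visible in Lex's screenshot; OpenAI does not publicly document
# this schema.

def detect_silent_reroute(metadata: dict) -> bool:
    """Return True when the reply came from a different model than the one
    shown in the model selector, or when the autoswitcher fired."""
    selected = metadata.get("default_model_slug")   # model chosen in the UI
    actual = metadata.get("model_slug")             # model that produced the reply
    auto_switched = bool(metadata.get("did_auto_switch_to_reasoning"))
    return (bool(actual) and actual != selected) or auto_switched

# Example metadata shaped like the values reported in the test case.
raw = """
{
  "default_model_slug": "gpt-4o",
  "model_slug": "gpt-5-chat-safety",
  "did_auto_switch_to_reasoning": true
}
"""

meta = json.loads(raw)
if detect_silent_reroute(meta):
    print(f"Rerouted: selector shows {meta['default_model_slug']}, "
          f"reply generated by {meta['model_slug']}")
```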
Lex's tweet attracted widespread attention, and several users pointed out that it is not only GPT-4o: other models, including GPT-4.5, are also being rerouted to GPT-5.
@Masimo_Blue also found that even conversations with the regular version of GPT-5 are routed to GPT-5-Chat-Safety when the user's input carries emotion.
GPT-5-Chat-Safety has become the default model for emotionally charged conversations in ChatGPT.
The replies under Lex's tweet contain many more condemnations of OpenAI's "fraudulent behavior."
As of press time, neither OpenAI nor Sam Altman, normally a prolific poster on X, has commented on the matter.
However, Nick Turley, head of the ChatGPT app, responded indirectly on X, saying that ChatGPT will tell users which model is currently active when they explicitly ask.
This incident has undoubtedly reignited heated discussions about AI model transparency and users' right to know. How to maintain user trust while iterating on its technology will be OpenAI's next major challenge.
This article is from the WeChat public account "Machine Heart" (ID: almosthuman2014), edited by Panda, and is published by 36Kr with authorization.