ChatGPT emerged three years ago.
Feeling a genuine sense of dread for the first time, Google hastily sounded a "Code Red."
Who could have imagined that just three years later, the same "Code Red" would go off inside OpenAI's own house?
CEO Sam Altman urgently sent an all-staff memo whose core was just one sentence:
ChatGPT is in imminent danger.
Altman: We are at a critical moment for ChatGPT
Codename "Garlic": OpenAI's mysterious model revealed
This time, it's OpenAI's turn.
According to Altman, OpenAI plans to release a brand-new reasoning model next week.
Internal evaluations show the model outperforming Gemini 3, though more work remains to improve the ChatGPT "experience."
Meanwhile, according to the latest report from The Information, OpenAI is also developing a new generation of models, internally codenamed "Garlic."
"Garlic" has achieved a major breakthrough in pre-training.
It fixes problems found in the earlier GPT-4.5-era pre-training and is expected to ship early next year, possibly as GPT-5.2 or GPT-5.5.
At least in internal testing, "Garlic" has outperformed Google's Gemini 3 and Claude Opus 4.5 on coding and reasoning tasks.
Last week, Chief Research Officer Mark Chen gave a closed-door presentation to the internal team, the core of which was just one sentence: "Garlic" is ready!
In an interview yesterday, Mark Chen publicly responded that Gemini 3 is a powerful model, but OpenAI already has a model that can rival it.
This suggests that OpenAI has already quietly secured its next trump card.
Mark Chen's exact words inside the company were:
We plan to release a version of Garlic as soon as possible. At the current pace, don't be surprised if we see GPT-5.2 or GPT-5.5 released early next year.
Earlier, The Information, SemiAnalysis, and other outlets had all reported that OpenAI had not managed to complete pre-training of a next-generation frontier model since GPT-4o.
Because of those setbacks, OpenAI was forced to shift its focus to reasoning models.
In October, Altman assured everyone that OpenAI would release a new large language model codenamed "Shallotpeat" to challenge Google's Gemini 3.
Clearly, "Garlic" and "Shallotpeat" are two different models.
The former incorporates the fixes for bugs uncovered during the development of "Shallotpeat," and its most critical breakthrough came in the pre-training phase.
As is widely known, Google's confidence in Gemini 3 rests largely on the qualitative leap it achieved in pre-training.
Even senior executives at OpenAI have privately acknowledged this.
However, during the development of "Garlic," OpenAI solved several key problems that had plagued its earlier pre-training runs.
The result reportedly improves on OpenAI's previous "best" and "much larger" pre-trained model.
In other words, GPT-4.5, released in February of this year, turned out to be a flash in the pan and has since faded into obscurity.
Essentially, these optimizations let OpenAI pack the same massive amount of knowledge into a smaller model, something that previously could only be done by building a truly enormous one.
Needless to say, training a giant model is far more expensive and time-consuming than training a smaller one.
Mark Chen also revealed an even more explosive piece of news:
Building on the experience gained from "Garlic," OpenAI has quietly kicked off work on the next generation of larger, more powerful models.
Over the past two weeks, the AI community's attention has been fixed on Google, with OpenAI unusually cast in the role of the one playing catch-up.
Two weeks after Gemini 3's release, ChatGPT's daily active users had dropped by 6%.
To turn the tide, OpenAI first has to sound the alarm.
"Code Red"
The battle for survival has begun.
A few weeks ago, OpenAI had already put itself on "Code Orange" in an effort to improve ChatGPT.
Now, everything is even more urgent.
With Code Red declared, projects that were already on the schedule have been pushed back:
Advertising: the plan to start making money from ads in search is on hold for now.
AI agents: the do-everything assistants that can book tickets and schedule medical appointments on their own will have to wait.
Pulse: the product meant to deliver a personalized news briefing every morning has been put on hold as well.
The goal is simple.
Every available scrap of compute, headcount, and cash is being pointed at one purpose:
making ChatGPT better, right now.
The moat is getting shallower.
Why go to such lengths?
OpenAI has found that what once looked like an insurmountable lead is being chipped away, bit by bit, by its competitors.
1. Growth doesn't seem to be as rapid as before:
On a call with investors, the CFO hinted that some of ChatGPT's growth metrics are slowing, whether that means user numbers, time spent, or subscriptions.
2. Google's counterattack is becoming increasingly threatening:
Its powerful new generation of models is genuinely attractive, and users and developers no longer look only to OpenAI.
Adding "AI Mode" to Search makes searching feel like chatting with an AI.
Under this combined effect, Gemini's monthly active users surged from 450 million in July to 650 million in October.
In an internal memo, Altman warned that Google's resurgence in AI could bring "temporary economic headwinds" to OpenAI.
3. The cash burn is simply enormous:
Over the coming years, OpenAI will burn through hundreds of billions of dollars training stronger models and keeping ChatGPT running.
By contrast, expected ChatGPT subscription revenue is roughly $10 billion this year, $20 billion next year, and $35 billion in 2027.
So to keep this cash-burning marathon going, OpenAI hopes to raise roughly another $100 billion.
Whether it succeeds or not depends on how ChatGPT performs.
In short, against this backdrop, any slowdown in growth or any user churn gets amplified into a life-or-death issue.
1. The battle for user numbers
OpenAI says ChatGPT currently accounts for 70% of global "AI assistant activity" and 10% of "search activity."
Google, meanwhile, points to Gemini's rapid growth and its deep integration across Search and the rest of its product lineup.
2. Ecosystem vs. Blockbuster
OpenAI's trump card today is an extremely powerful, wildly popular ChatGPT plus a set of developer APIs.
Google's offering is Search + Gmail + Docs + Android + Chrome + YouTube + ... + Gemini, with AI woven through the entire ecosystem.
Where does OpenAI plan to invest its resources?
In the memo, Altman highlighted several areas to be given top priority:
Allowing everyone to customize their own AI
He said he wanted to make the people behind the 800 million weekly users feel that this is "my ChatGPT," rather than a generic, mass-market tool.
Users should be able to customize its speaking style, preferences, and workflows, and have it remember who you are and how you do things.
This lines up with the "Memory" feature mentioned earlier: the AI does not just answer questions, it gets to "know you" over the long run.
Today's ChatGPT feels like a receptionist you have to reintroduce yourself to every single time.
In the future, it will be more like a long-term assistant: remembering what job you do, how many children you have, what style you use when writing code, and what tone of voice you dislike.
This is crucial for increasing user engagement.
When a tool starts to "understand you," you're less likely to switch platforms frequently.
Image generation is the second battleground
Image generation is important because:
Many people may go a long time without asking ChatGPT to write long-form text, but they turn to it for images all the time;
It is a key entry point for reaching creators, designers, and everyday users;
Image-generation models can also underpin plenty of product scenarios (ad design, e-commerce listings, game concept art, and so on).
In recent months, Google has dominated the global AI conversation on the back of Nano Banana and Nano Banana Pro's overwhelming lead.
So it is not hard to see why Altman listed image-generation capability as one of Code Red's priorities.
Winning the mindshare battle on public leaderboards
"Model behavior" includes several things:
Are the answers accurate and helpful, without making things up?
Is the tone pleasant, not snarky, and human?
Does it strike the right balance between safety and openness?
What Altman wants is to markedly improve these "behaviors," so that on public leaderboards such as LMArena, users prefer the model behind ChatGPT over its competitors.
Rankings like these carry real weight with developers and heavy users, shaping which model they choose to build their applications on.
Speed, reliability, and over-refusal
In addition, Altman specifically called out three areas to optimize:
Faster responses
Higher reliability
Less "over-refusal"
Speed matters not just to users but to developers: high latency can ruin a product experience outright.
"Over-refusal" is another classic pain point: you ask a perfectly normal question, and the AI, spooked by its safety controls, keeps repeating "Sorry, I can't answer that."
The next task is to minimize this kind of collateral damage to legitimate questions while staying inside the safety red lines.
What does all this mean?
For ordinary users, the time and habits invested in a platform create a stronger bond with it.
The ChatGPT of the future will look more and more like a personal AI assistant rather than a public Q&A machine: better at understanding your preferences, better at "remembering," and more like a long-term companion.
On the experience side, it will be faster, more stable, and less likely to refuse you without cause.
The easier it is to use, the more people will come to depend on it, moving from "playing with it occasionally" to "can't get through the day without it."
Images, creativity, and multimodality will keep growing in importance, covering not just Q&A and writing but the full workflow of writing + drawing + designing + researching.
For the industry, competing on experience will matter more in the short term than competing on parameter counts.
As parameter counts keep climbing, ordinary users can no longer tell "1 trillion parameters" from "2 trillion parameters," but they can tell at a glance which product opens faster, which is more stable, and which understands them better.
For OpenAI, this is a battle that "may not determine its survival, but will have a huge impact on its valuation."
A $100 billion financing target and hundreds of billions of dollars in computing power investment both require a strong and stable cash cow to support them.
ChatGPT is that cow: it needs not just traffic, but stickiness and willingness to pay.
Developers and founders, for their part, need to start thinking about which ecosystem camp to align with:
If ChatGPT keeps leading on experience and reputation, it becomes the default "AI utility," like water and electricity, that everyone plugs into;
If Google and others make certain scenarios smoother, they will inevitably siphon off some of the new applications.
An arms race with no end
In short: in the AI field, there is no eternal throne.
Just three years ago, ChatGPT was the "dragon slayer" that gave Google a real scare, but now it is struggling to cope with the fierce counterattack from the search empire.
For ordinary users, though, this is exactly the dividend of the era: the harder they fight, the better the tools we get to use.
When all the dust settles, a superb product experience is the only thing that truly lasts.
Reference:
https://www.theinformation.com/articles/openai-ceo-declares-code-red-combat-threats-chatgpt-delays-ads-effort?rc=epv9gi
This article is from the WeChat official account "New Zhiyuan," author: New Zhiyuan, published with authorization by 36Kr.