[Introduction] OpenAI is in a hurry! A week after Claude Mythos leaked, OpenAI officially released GPT-5.4-Cyber. This powerful model, tuned specifically for security defense, can analyze malware even without source code; meanwhile, OpenAI's Codex Security tool has already automatically fixed over 3,000 high-risk vulnerabilities.
OpenAI is following in Anthropic's footsteps and starting to play the "mystery" card!
Just now, OpenAI officially released the new GPT-5.4-Cyber model, which is only available to cybersecurity experts.
As the name suggests, this is a powerful model fine-tuned specifically for security defense.
At the same time, OpenAI today upgraded its Trusted Access for Cyber (TAC) program.
First came Claude Mythos, then came GPT-5.4-Cyber.
GPT-5.4-Cyber Released!
With the step-by-step increase in model capabilities, GPT-5.4 has been rated as having "high" level cybersecurity capabilities.
To unleash its full potential in defense, OpenAI decided to launch the more targeted GPT-5.4-Cyber.
This model is specifically designed to reduce refusal rates for legitimate security research, lowering the barriers security experts face in vulnerability analysis and code auditing.
Moreover, security practitioners can now use it to analyze compiled software.
Even without source code, it can efficiently identify malware tendencies and assess software robustness.
Project TAC: Building a Wall of Trust in the AI Era
OpenAI's core strategy lies in empowering defenders.
Through the TAC program, OpenAI provides automated verification that removes technical restrictions and friction in security tasks.
Its cybersecurity strategy is guided by three principles:
- Democratized Access
Defense tools should not be the exclusive domain of large corporations.
With KYC (Know Your Customer) verification, whether you are an individual researcher or a small team, as long as your identity checks out, you can use the most advanced AI to protect critical systems.
- Iterative Deployment
Safety capabilities cannot be developed in a lab isolated from the real world.
OpenAI will make the model more robust in real-world scenarios by deploying it on a small scale and continuously testing its resistance to jailbreaking and attacks.
- Investing in Ecosystem Resilience
The community's resilience is enhanced through a security grant program of up to $10 million, contributions to open-source security projects (such as Codex for Open Source), and the Codex Security tool for automatically detecting and patching vulnerabilities.
Don't think this is just a concept; Codex Security has already delivered results:
It can automatically monitor code, report vulnerabilities, and even write out fixes. Since its release, it has fixed over 3,000 high-risk and critical vulnerabilities.
This "security is done as soon as the code is written" model is the future trend.
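The article does not describe Codex Security's internals. Purely as an illustration of what a "monitor, report, fix" loop looks like in miniature, here is a hypothetical sketch in Python; the rule set, the `Finding` type, and the patch logic are all invented for this example and are not OpenAI's implementation.

```python
# Hypothetical sketch of an automated scan-and-patch loop in the spirit of
# a "monitor code, report vulnerabilities, write fixes" workflow.
# Everything here is an illustrative placeholder, not Codex Security's code.
import re
from dataclasses import dataclass

@dataclass
class Finding:
    line_no: int
    rule: str
    snippet: str
    suggested_fix: str

# Toy rule: flag yaml.load() called without an explicit Loader (a classic
# unsafe-deserialization pitfall) and suggest the safe variant.
RULES = [
    (re.compile(r"yaml\.load\((?![^)]*Loader)"), "unsafe-yaml-load",
     lambda line: line.replace("yaml.load(", "yaml.safe_load(")),
]

def scan(source: str) -> list[Finding]:
    """Report every line matching a rule, with an auto-generated fix."""
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        for pattern, rule, fix in RULES:
            if pattern.search(line):
                findings.append(Finding(i, rule, line.strip(), fix(line).strip()))
    return findings

def patch(source: str) -> str:
    """Apply every suggested fix, yielding the patched source."""
    lines = source.splitlines()
    for f in scan(source):
        lines[f.line_no - 1] = lines[f.line_no - 1].replace(f.snippet, f.suggested_fix)
    return "\n".join(lines)

if __name__ == "__main__":
    code = "import yaml\ncfg = yaml.load(open('c.yml'))\n"
    for f in scan(code):
        print(f"L{f.line_no} [{f.rule}]: {f.snippet} -> {f.suggested_fix}")
```

A real system would of course use a learned model rather than regex rules and would validate each fix before merging, but the report-then-patch shape of the loop is the same.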
Entrusting the containment of the most powerful AI to the "gatekeepers"
A few days ago, an exclusive Axios report first revealed this security model fine-tuned from GPT-5.4.
In fact, after GPT-5.3-Codex went live, OpenAI quietly piloted TAC internally with a select group of participants, with the core aim of accelerating defensive research using high-capability models.
Unexpectedly, Anthropic once again seized the initiative in promotion and marketing.
Last week, Claude Mythos (preview) shipped with a comprehensive set of benchmark results. At the same time, Anthropic joined forces with 40 giants to launch "Project Glasswing".
They initially invested $100 million to give top tech giants like Apple, Google, and Nvidia early access to Claude Mythos.
OpenAI showed its hand a week later, late but not too late, since most people still cannot use either model.
Simply put, Anthropic and OpenAI both aim to hand the most powerful AI capabilities precisely to the "gatekeepers."
Despite the tight defenses of the Silicon Valley giants, a report from AISLE last week poured cold water on the situation: some smaller models, including GPT and DeepSeek variants, already possess a portion of Mythos's vulnerability-discovery capability.
This puts the AI giants in a delicate game: they must keep their own blades from cutting others, while building a sufficiently strong AI defense system before more dangerous, uncontrollable models emerge.
This competition between AI's "aggressive power" and "control" has only just begun.
References:
https://openai.com/index/scaling-trusted-access-for-cyber-defense/
https://www.axios.com/2026/04/14/openai-model-cyber-program-release
This article is from the WeChat official account "New Zhiyuan" , edited by Taozi, and published with authorization from 36Kr.