After a week of holding back, OpenAI's big news has finally arrived.
Last night a friend of mine was lying in bed scrolling through short videos when he opened his Chrome feed and found his screen full of sensational headlines like "Terminator Awakening" and "AI Locust Plague."
A closer look showed that OpenAI had quietly pulled off something big. To prepare for the release of more powerful models in the coming months, the company said, it plans to fine-tune its existing model to support defensive cybersecurity use cases, producing a new model built specifically for security work, codenamed GPT-5.4-Cyber.
Although it's not the GPT-6.0 that everyone was most looking forward to... new information is always better than no information at all.
(Image source: OpenAI)
As soon as the news broke, Reddit exploded. A crowd of onlookers confidently declared that this was OpenAI's counterattack against Claude Mythos, and that from now on machines would see through not just our web page code but our underwear too. Nothing is safe anymore!
As a seasoned veteran who has been working in the tech industry for many years, I can only shake my head helplessly when I see these comments.
Putting aside the fact that GPT-5.4-Cyber is only being rolled out on a small scale, with access limited to authorized cybersecurity professionals: it's 2026, and every time a big company releases a new model, people still let themselves be led by the nose and milked by waves of anxiety-inducing marketing.
To figure out what this thing is all about and whether it's really going to ruin the livelihoods of security companies, let's take a closer look at what these tech giants are up to.
Added binary reverse engineering functionality, but only for expert use.
Let's start by talking about today's main subject, GPT-5.4-Cyber. What exactly is it?
According to official statements, GPT-5.4-Cyber is an optimized version of GPT-5.4. The model has fewer functional restrictions and stronger cybersecurity capabilities: it lowers the entry barrier for legitimate security work and adds new features for advanced defense workflows.
The general-purpose models we used in the past were fine for writing love letters or looking up recipes, but ask them to do real security work and they couldn't even find a backdoor.
The newly released GPT-5.4-Cyber, however, adds binary reverse engineering capability. It doesn't need the software's source code; it can dig security vulnerabilities straight out of compiled low-level binaries, like a master butcher carving up an ox.
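To give a flavor of what "reading compiled low-level files" means, here is a deliberately toy sketch (not how GPT-5.4-Cyber actually works): a crude first-pass triage might just scan a binary's raw bytes for references to classically unsafe C library functions.

```python
# Toy illustration only: real binary reverse engineering uses disassemblers
# and symbolic analysis. This merely greps a compiled file's raw bytes for
# names of notoriously unsafe libc functions, as a crude triage heuristic.
UNSAFE_FUNCS = [b"strcpy", b"gets", b"sprintf", b"strcat"]

def triage_binary(raw: bytes) -> list[str]:
    """Return the unsafe function names referenced in the binary's bytes."""
    return [name.decode() for name in UNSAFE_FUNCS if name in raw]

# Example: a fake ELF-like blob whose symbol strings mention strcpy and gets.
blob = b"\x7fELF" + b"\x00" * 16 + b"strcpy\x00printf\x00gets\x00"
print(triage_binary(blob))  # ['strcpy', 'gets']
```

Actual tooling disassembles machine code and traces data flow; the point here is only that a binary carries recoverable structure even without source code.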
(Image source: OpenAI)
Furthermore, to make security experts' work easier, OpenAI has deliberately tamed the model's temperament. Previously, ask a general-purpose model how to find system vulnerabilities and it would righteously refuse; this optimized version answers openly and holds up under whatever stress tests you throw at it.
If you were really keen, you could probably use it to decompile something like Apple TV and open-source the result without much trouble.
However, you'll have to accept Apple's lawyer's letter yourself.
Of course, such dangerous goods cannot be left on the street for people to pick up, so OpenAI only made them available to certified security companies and enterprise teams, focusing on an ecosystem of internal defense.
To try it, individual users can verify their identity at chatgpt.com/cyber, while enterprises can apply through OpenAI staff for trusted access for their teams. Every customer who passes the review receives the improved version of the existing model.
(Image source: OpenAI)
Interestingly, just a week before the release of GPT-5.4-Cyber, Anthropic had released a preview version of its own Claude Mythos.
However, they went to extremes, immediately stating that the model was "too dangerous" and refusing to make it available to the public.
According to the test report, Mythos went absolutely crazy during internal testing. This model, entirely on its own, thoroughly uncovered unknown vulnerabilities in major operating systems and browsers, and even stumbled upon a long-standing vulnerability that had been hidden in an open-source system for over twenty years.
(Image source: Anthropic)
Because this thing was so aggressive, Anthropic's top management was terrified and only dared to use it secretly in their own closed networks with the help of Microsoft and Google.
In the past, finding vulnerabilities was the work of top hackers: red-eyed, coffee in hand, combing through tens of thousands of lines of code line by line.
Now, AI works 24/7, finding vulnerabilities faster than you can find memes. The attacker's barrier to entry has been completely shattered, and the defender's only option to survive is to hire AI to patch vulnerabilities as well.
These two models, one overt and one covert, have directly ushered in a new era of cybersecurity, where machines are battling machines.
Is corporate security a disaster or an opportunity?
Faced with this dramatic change comparable to the Industrial Revolution, the reactions of various figures were quite fascinating.
Reddit's trolls were half trembling at the thought of being replaced, half joking that these AI giants had finally realized there's no money in doing high schoolers' homework and were now eyeing the lucrative corporate security market.
(Image source: Reddit)
However, government departments and financial giants are genuinely worried. According to The Guardian, UK AI minister Kanishka Narayan is convening representatives of major UK banks, insurers, and exchanges, while US Treasury Secretary Scott Bessent is likewise meeting with major Wall Street banks to discuss the cyber risks such models could pose.
Their caution is entirely understandable. Last month, the artificial intelligence tools of the Israeli startup Tenzai participated in a series of elite hacking competitions, and its model outperformed more than 99% of the human participants. Google also discovered several samples last year that directly connected to large models at runtime to generate malicious scripts.
(Image source: Forbes)
Whether or not ordinary users have been empowered, cyberattackers certainly have.
So should we start worrying ourselves sick, or reject large models outright on the belief that new technology inevitably brings devastating security risks?
I don't think so.
According to Forbes, Jeremiah Grossman, CEO of cybersecurity firm Root Evidence, stated that only 10% to 20% of actual cyberattacks in the industry currently originate from exploiting software vulnerabilities; the majority of attacks are infiltrated through phishing or social means.
In other words, the remaining 80% to 90% of attacks don't need a super-sized model backed by mountains of compute at all.
According to Sophos's "2025 State of Ransomware Report," a higher percentage of attacks are caused by operational factors, with lack of expertise accounting for 40.2% and insufficient personnel/capabilities accounting for 39.4%.
(Image source: Sophos)
The number one attack vector? Hackers getting hold of leaked employee credentials.
And after that? Phishing emails from someone pretending to be your boss, tricking you into clicking a link.
That's right, that's the truth of the industry.
No need for mythologizing, just face it with equanimity.
In my opinion, the release of GPT-5.4-Cyber and the Claude Mythos preview has indeed put a harsh but very real problem in front of everyone:
A wave of cyberattacks driven by artificial intelligence is on the horizon.
In terms of execution efficiency and speed, it is an indisputable fact that machines have surpassed humans in attack and defense. However, they have not created many new threats out of thin air; they have simply made existing attack methods faster and cheaper, more like a ruthless automated tool that does odd jobs for cybercrime teams.
That said, we shouldn't glorify the contributions of these AI companies either.
Jeremiah Grossman stated that the large number of vulnerabilities identified by Claude Mythos is making it difficult for security companies to prioritize vulnerability remediation, resulting in a large backlog of vulnerabilities.
"Too powerful"? "A threat to humanity"? "An AI locust plague"? These tech giants keep boasting on social media and at press conferences about how amazing their upcoming models are and how deep the vulnerabilities they can uncover. In reality, they're quite shrewd.
Their goal is to package AI as an omnipotent god, while simultaneously raising the industry's entry barriers, making you feel that if you cooperate with them, you can rest easy, and if you don't, your company will go bankrupt tomorrow.
It's pretty funny.
Of course, companies still need to take this wave somewhat seriously. Even if both firms have promised not to unleash these models while offensive capabilities far outstrip current defenses, enterprises can no longer afford to let patching drag on for two hundred-plus days the way fixes for edge applications used to.
That kind of slow, manual patching is useless against machine-speed attacks.
For ordinary people like us, being vigilant is the best form of security.
Ultimately, the weakest link in the online world is never the code, but the person sitting in front of the screen. Instead of panicking every day following the PowerPoint presentations of big companies, you should first make your password more complex.
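On the "make your password more complex" front, a minimal sketch using Python's standard `secrets` module to generate a random passphrase. The short word list here is a stand-in for illustration; in practice you'd use a real Diceware-style list with thousands of entries.

```python
import secrets

# Stand-in word list for illustration only; a real Diceware-style list
# (e.g. the EFF long word list) has thousands of words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "pickle", "canyon", "velvet", "monsoon", "glacier", "trombone"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Generate a random passphrase using a CSPRNG (secrets.choice)."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. 'orbit-velvet-horse-canyon'
```

Four words drawn from a few thousand gives far more entropy than the usual "one capital, one digit, one symbol" ritual, and it's easier to remember.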
After all, even the smartest AI can't wake up a fool who insists on sending money to a scammer.
This article is from "Lei Technology" and is published with authorization from 36Kr.




