That's very interesting.
Just today (February 6, 2026), Anthropic and OpenAI dropped two bombshells within twenty minutes of each other, as if by prior arrangement: Claude Opus 4.6 and GPT-5.3 Codex.
There was no pre-event hype, no build-up, not even time for the market to digest it. This was an unmistakable head-on clash of models.
This confrontation marks the AI race's official transition from the past two years' contest over chat and dialogue ability to an entirely new stage: the era of agent autonomy.
If we break down the two companies' announcements into "what capabilities they are actually strengthening," the differences become quite obvious.
01
To sum it up, Anthropic focuses on three key points: controllable agent behavior boundaries, stability in task breakdown and execution, and clearer human oversight and rollback mechanisms.
They didn't tout the Agent as an all-rounder that "can do everything," but emphasized one key point: the Agent must operate within a clear framework of rules, permissions, and auditing.
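What does such a framework look like in practice? Below is a minimal sketch, assuming a simple tool whitelist, a human-approval list, and an append-only audit log. These names and structures are my own illustration, not anything from Anthropic's actual API.

```python
# Hypothetical sketch of a rules/permissions/audit gate around agent actions.
# Nothing here is Anthropic's real API; it only shows the shape of the pattern.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PolicyGate:
    allowed_tools: set[str]                                    # explicit whitelist
    requires_approval: set[str] = field(default_factory=set)   # human-in-the-loop actions
    audit_log: list[dict] = field(default_factory=list)        # append-only record

    def authorize(self, tool: str, args: dict, approved: bool = False) -> bool:
        """Check a proposed action against the policy and record the decision."""
        ok = tool in self.allowed_tools and (tool not in self.requires_approval or approved)
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool, "args": args, "allowed": ok,
        })
        return ok

gate = PolicyGate(allowed_tools={"read_crm", "send_email"}, requires_approval={"send_email"})
assert gate.authorize("read_crm", {"account": "ACME"})
assert not gate.authorize("send_email", {"to": "all"})   # blocked until a human approves
```

The point of the pattern is that every proposed action is checked and recorded before it runs, which is what makes oversight and rollback tractable at all.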
OpenAI's approach, however, is clearly more radical:
Stronger autonomous planning capabilities, continuous execution across multiple steps and tools, and the model taking full responsibility for complex objectives. In short, the signal is direct: the agent can take over the entire task process, from understanding the objective to delivering the final result.
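Stripped to its skeleton, "taking over the whole task process" is a plan-act-observe loop. The toy sketch below shows only that shape; it is not GPT-5.3 Codex's actual architecture, and `plan_next_step` and `run_tool` are hypothetical stand-ins for the model call and tool execution.

```python
# A generic plan-act-observe loop: the skeleton of autonomous multi-step execution.
# plan_next_step() and run_tool() are invented stand-ins, injected by the caller.

def run_agent(goal: str, plan_next_step, run_tool, max_steps: int = 20) -> str:
    history: list[tuple[str, str]] = []          # (action, observation) pairs
    for _ in range(max_steps):
        step = plan_next_step(goal, history)     # the model decides the next action itself
        if step["action"] == "finish":
            return step["result"]                # the agent delivers the final result
        observation = run_tool(step["action"], step.get("args", {}))
        history.append((step["action"], observation))
    raise TimeoutError("goal not reached within step budget")
```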
In summary, Anthropic is emphasizing "avoiding mistakes," while OpenAI is betting on "getting it running first."
This is a battle over technical direction.
Many people's first reaction is to look at benchmarks, parameter counts, or model generations, but this time none of that is the point, because the essence of this contest is: should AI be built as a "reliable execution tool" or a "highly autonomous action system"?
In the chat era, the cost of model failure was extremely low: if an answer was wrong, the user could simply re-ask or ignore it. In the agent era, the cost of a mistake grows exponentially.
An agent takes over the entire process: it breaks down tasks and selects tools on its own, running continuously while you're not watching. When it makes a mistake, it doesn't just answer wrong; it actually messes things up. That is precisely why the two companies pivoted to agents at almost the same moment yet took completely different paths.
Why now? Why the rush?
There are at least three reasons behind this. First, chat products have peaked as a form: daily active users, usage frequency, and the perceived gains in "smartness" have all slowed, and competing further on the dialogue experience yields smaller and smaller returns.
Second, if enterprises truly want AI to "do the work for me," scenarios such as automated processes, R&D collaboration, operational execution, and analytical decision-making all fundamentally require agents.
Third, and most importantly: whoever defines how agents work first will have the opportunity to define the infrastructure for the next generation of AI, which is a war for ecosystem dominance.
So, from a product-logic perspective, what does an agent actually mean?
If ChatGPT is the "search and content portal in the AI era," then Agent is more like a "digital employee" in an enterprise, an automated execution layer at the operating system level, and the core connecting models, tools, and the real world.
This also means that the standards for evaluating AI have changed. In the past, we looked at whether the answers were accurate and human-like. Now, we look at whether the task completion rate is high, whether the continuous operation is stable, and whether errors can be corrected.
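To make the new yardsticks concrete, here is a toy scorer for a batch of agent runs. The field names and the exact metric definitions are invented for illustration; they are not an established benchmark.

```python
# Toy scoring of agent runs along the three axes named above; every field is made up.
def score_runs(runs: list[dict]) -> dict:
    total = len(runs)
    completed = sum(r["completed"] for r in runs)
    crashed = sum(r["crashed"] for r in runs)
    # "corrected" = runs where the agent hit an error but still finished the task
    recoverable = [r for r in runs if r["had_error"]]
    corrected = sum(r["completed"] for r in recoverable)
    return {
        "completion_rate": completed / total,
        "stability": 1 - crashed / total,
        "error_correction_rate": corrected / len(recoverable) if recoverable else 1.0,
    }

print(score_runs([
    {"completed": True,  "crashed": False, "had_error": True},
    {"completed": False, "crashed": True,  "had_error": True},
    {"completed": True,  "crashed": False, "had_error": False},
]))
```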
This represents a significantly more challenging upgrade for the model, platform, and developers.
Therefore, one signal is very clear: the watershed moment has arrived. The competition from here on is over system-level capabilities; whoever's agent is more reliable, with clearer boundaries, is the one enterprises will actually put into production.
02
So, where does SaaS fit into this agent shift?
The recent sharp sell-off in SaaS and AI-application stocks is not surprising at all, because the market has finally grasped a more fundamental issue: when agents start to take over the "doing," the value foundation of traditional SaaS is shaken.
For the past two decades, the core logic of SaaS has been very simple: selling "the right to use a tool." If your company has 100 people, I will sell you 100 accounts. In essence, I am selling you a process framework so that people follow a predetermined path: click through a few screens, fill in a few fields, perform a few operations.
Efficiency improvement relies on "systematization," not "automation."
The emergence of agents directly challenges this premise. Now everyone is more concerned about how much value SaaS still has. Let me make it clear first: agents will not replace all SaaS all at once.
The real problem is that when an agent can perform tasks across systems, the "interface value" of SaaS collapses.
In the Agent era, user needs have become: give the Agent a goal, and it will automatically call CRM, spreadsheets, BI, email, and internal systems to deliver the results directly.
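In code terms, that "goal in, result out" pattern looks roughly like this sketch. The tool names are invented and the plan is hard-coded to keep it short; a real agent would let the model choose the tools, and each lambda stands in for a real system's API.

```python
# Sketch of "goal in, result out": the agent fans out across systems behind one request.
# Tool names, data, and the hard-coded plan are purely illustrative.

TOOLS = {
    "crm.lookup":   lambda q: {"account": q, "arr": 120_000},   # stand-in for a CRM API
    "sheet.append": lambda row: f"appended {row}",              # stand-in for a spreadsheet API
    "email.send":   lambda to, body: f"sent to {to}",           # stand-in for an email API
}

def fulfill(goal: str) -> str:
    # A real agent would plan these calls itself from the goal; the user
    # states an outcome and never touches the individual applications.
    account = TOOLS["crm.lookup"]("ACME")
    TOOLS["sheet.append"]([account["account"], account["arr"]])
    return TOOLS["email.send"]("boss@example.com", f"ACME ARR: {account['arr']}")

print(fulfill("Send the boss ACME's ARR and log it in the tracker"))
```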
This means that many things that SaaS companies pride themselves on, such as function menus, operation paths, user training, and user learning costs, suddenly cease to be competitive advantages when faced with agents.
The market is now pricing in this change in advance. SaaS is being downgraded, which is why SaaS stocks fluctuate so much when there is a flurry of news related to agents. Capital is also recalculating. If users ultimately use functions through agents, how much premium can SaaS still charge?
When the entry point changes from "people to system" to "agent to system", SaaS changes from "front-end product" to "back-end capability module", and back-end modules are inherently subject to price reductions.
Therefore, the real danger lies in "process-based SaaS".
These are the products most dependent on manual operation: process-heavy management systems with little intelligence, tools that manufacture stickiness through operational complexity, and products that require constant manual maintenance, data entry, and approvals.
These systems exist on the premise that humans must participate in every step, but the core value of Agent is precisely to "automate the steps themselves".
Is there still a chance for SaaS? I think so, but the role needs to change.
In the agent era, SaaS either moves up or moves down. Moving up, it becomes the "command center" and "control layer" for agents, providing permissions, auditing, compliance, and result verification; moving down, it becomes a high-quality capability interface that agents can call, fully API-based and modular.
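The "moving down" path has a very concrete shape: the product stops being screens and becomes a well-described endpoint that an agent can discover and call. A minimal sketch, using Flask for brevity; the invoicing route and its fields are invented for illustration.

```python
# What "moving down" might look like: a SaaS capability exposed not as a UI
# but as one agent-callable endpoint plus a machine-readable schema.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/v1/invoices")
def create_invoice():
    data = request.get_json()
    # No menus, no onboarding: validate, execute, return a structured result.
    invoice = {"id": "inv_001", "amount": data["amount"], "status": "issued"}
    return jsonify(invoice), 201

@app.get("/v1/schema")
def schema():
    # Agents discover capabilities from the schema instead of learning a UI.
    return jsonify({"create_invoice": {"params": {"amount": "number"}}})
```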
The most dangerous position is being caught in the middle: wanting to keep selling interfaces and accounts while unable to control the agent entry point. So, returning to today's "model clash," the split between Anthropic and OpenAI will directly shape the fate of SaaS.
Anthropic's approach is more beneficial to enterprise SaaS companies that emphasize compliance, security, and boundary control; OpenAI's approach, on the other hand, is more likely to accelerate the encroachment of "results-oriented agents" on the SaaS front end.
You'll find that they're all trying to define "who can redistribute the value of the software industry chain".
If we were to summarize SaaS, it would probably be this: Agents are forcing SaaS to answer a more brutal question: What are you selling, a tool or a result? And capital has already cast its vote.
03
Given this, how should SaaS charge for its products in the future? This is a fundamental question of business model and business efficiency.
Over the past two decades, the secret to wealth in the SaaS industry has been remarkably simple: "Charging per person."
If your company has 100 people, I'll sell you 100 accounts. Behind this lies an unspoken idea: software is just a tool; the work still needs to be done by people. Because human output is limited, the number of accounts represents the company's scale and purchasing power.
This idea completely falls apart in the face of "digital employees" like Claude 4.6 and GPT-5.3.
When Claude 4.6 arrives with its "agent squad," it's there to "do the work for you." Take a ten-person team: it might now need only one supervisor plus one AI agent.
This is where it gets awkward: what about the remaining nine accounts? Of course you'll cancel them. And this is the most terrifying "death spiral" the SaaS industry now faces: the more advanced and intelligent the product, the less money you actually collect.
This logical contradiction has cornered many traditional SaaS vendors. If they do AI too well, they're shooting themselves in the foot; if they don't, their small competitor next door, who fully embraces agents, will eliminate them with lower prices and more direct results.
To put it simply, SaaS used to sell the "right to use the tool," but in the future, people will be buying the "completion of the task." The core difference is that before, you bought the "process," and now you buy the "result."
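A toy calculation makes the shift concrete; every number below is invented for illustration.

```python
# Toy arithmetic for the "process vs. result" pricing shift; all figures invented.
seats, price_per_seat = 100, 30             # classic SaaS: 100 seats at $30/month
seat_revenue = seats * price_per_seat       # $3,000/month regardless of output

tasks_done, price_per_task = 5_000, 0.50    # outcome pricing: pay per completed task
task_revenue = tasks_done * price_per_task  # $2,500/month, but scales with work delivered

# If agents shrink the team to one supervisor, seat revenue collapses to $30/month,
# while task volume (and task revenue) may stay flat or even grow.
print(seat_revenue, task_revenue, 1 * price_per_seat)
```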
It's clearer from a different angle:
If you buy an electric drill, it's a tool, and you have to drill the hole yourself; but if there are many people in your family, and everyone wants to drill a hole, you'll have to buy several.
If there were an "automatic drilling service" available now, you could simply say, "I want to drill a 5-centimeter hole in this spot," and the hole would appear automatically. Would you still care who owned the drill, what it looked like, or how many buttons it had? You wouldn't care at all; all you'd care about is whether the hole is accurate.
This is what I've been saying all along: the "interface value" of SaaS is collapsing.
In the past, SaaS companies polished their UI and interactions, cultivated user habits, and designed operation paths, doing everything possible to make you feel that "this software is easy to use, and I'm used to it."
This habit is the moat; changing to a different system would require retraining employees, which is too costly.
In the agent era, this moat crumbles overnight, because from now on it is not "people" using the software but "agents." Agents have no feelings, need no beautiful UI, and need no user education; all they need is an API.
If vendor A charges 1 cent per call today, and vendor B offers 0.8 cents per call tomorrow with better stability, the agent will switch without hesitation. This means SaaS is being downgraded from a "front-end product" to a "back-end capability module."
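That switching logic is almost trivial to write down. A sketch, with invented vendors and numbers echoing the A-versus-B example above:

```python
# How an agent might pick a provider on price and stability alone; no brand loyalty.
# Vendors, prices, and the uptime threshold are all invented for illustration.

def pick_vendor(vendors: list[dict], min_uptime: float = 0.99) -> dict:
    # Filter out unstable vendors, then simply take the cheapest of what remains.
    stable = [v for v in vendors if v["uptime"] >= min_uptime]
    return min(stable, key=lambda v: v["price_per_call"])

vendors = [
    {"name": "A", "price_per_call": 0.010, "uptime": 0.995},
    {"name": "B", "price_per_call": 0.008, "uptime": 0.999},
]
print(pick_vendor(vendors)["name"])   # -> "B": cheaper and more stable wins instantly
```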
Once it becomes a backend module, it loses "control" over the user. The user only talks to that smart agent, and the user may not care at all whose interface the agent is connected to.
This transfer of power is fatal for SaaS companies because backend modules are standardized, and standardization means engaging in price wars with profits as thin as paper. That's why SaaS stocks have plummeted in the last two days; investors are worried about this very issue.
Therefore, SaaS vendors will go from "landlords who collect rent" to "porters hauling goods for the models."
04
Since the value of the interface has collapsed, who becomes the "gateway to everything" in the AI era? Or, to put it another way, where does the agent actually reside?
You can think of today's phones and computers as a huge cluster of "information islands." To protect their own territory, big companies have deliberately built these disconnected silos.
If you want to hail a ride, you have to find the app icon first; if you want to process data, you have to switch back and forth between different apps, turning yourself into a "human data cable." This fragmented interaction is essentially a way for big companies to collect "attention tax."
But this clash between Claude 4.6 and GPT-5.3 is actually declaring that the barriers built up by "software walls" are being eroded by the "strong alkali" of agents.
When agents begin to take over tasks, apps degenerate into components hidden behind the scenes. This means that the power center of the internet is being massively "intercepted." Whoever holds the command box of the agent holds the "dispatch power" of the entire digital world. This is a particularly terrifying traffic funnel.
Imagine this: in the future, when you buy plane tickets, book hotels, or even write a piece of code, you will no longer think about "browsing through a certain app." The agent will directly filter, make decisions, and execute for you.
At this point, those search engines, vertical e-commerce platforms, and social media apps that originally controlled the entry points will suddenly find themselves "outmaneuvered."
This is why OpenAI and Anthropic are so aggressive this time, even at the risk of offending Microsoft and Apple, in order to seize control of the desktop platform. They understand very well that whoever defines the interaction of the Agent becomes the de facto operating system of the AI era.
This is like building a "castle in the air" on someone else's territory, where the underlying Windows or iOS becomes just a power supply system and low-level protocols, while the "skin" that actually interacts with the user is taken away by the Agent.
This transfer of power will also directly lead to the "decentralization" of hardware.
The reason we need a 13-inch screen, a precise mouse, and a screen full of icons is that we have to manually operate those complex software interfaces.
If all of this is simplified into a "digital manager" that can communicate at any time, then whether we are holding a mobile phone, glasses, or a pendant will no longer matter. Hardware will gradually become "lighter," even so light that we will not feel its presence.
The ecosystem walls that big companies have painstakingly built over the past twenty years look, in the logic of agents, like coachmen debating how to improve their whips at the dawn of the automobile age.
People will suddenly realize that what we've always wanted is "to get results".
At this juncture, we are witnessing a redistribution of "digital sovereignty." Will model companies bypass the underlying infrastructure and directly take over users? Or will system giants turn around and put agents in a cage? I'm not sure of the answer, but the big companies should react soon.
What is certain is that we are bidding farewell to an internet centered on "software" and entering a digital world centered on "tasks".
In this world, users only care about one thing: Can you get this sorted out for me? Once "getting things sorted out" becomes the core value, the entry point belongs to the layer that has the best scheduling ability, bears the consequences, and manages the risks.
One closing note: AI-application sectors will likely fall further in the near term, while model companies will likely rise. That's all I can say for now.
This article is from the WeChat public account "Wang Zhiyuan" (ID: Z201440) , authored by Wang Zhiyuan, and published with authorization from 36Kr.