Source: Synced
Just now, a "closed-door" event occurred in the AI community that is worthy of being recorded in history.
Anthropic has officially banned its own subscription plans from being used with OpenClaw!
Boris Cherny, the creator of Claude Code, announced:
Starting at 3 p.m. Eastern Time on April 4 (3 a.m. Beijing time on April 5), Claude subscription quotas are blocked from all third-party tools; those tools can only continue via separate usage packs or the API.

This means that the tens of thousands of developers and startups who relied on OpenClaw to boost their efficiency lost their "unlimited" usage overnight and were forced onto an extremely expensive "pay-as-you-go" model.
Claude's loyal users on OpenClaw have taken a heavy hit.
The timing of this announcement is also quite intriguing—Peter Steinberger, the father of OpenClaw, had recently moved to OpenAI, and Anthropic's intentions are quite clear!
This can only be described as commercial revenge disguised as policy.
The news quickly shot to the top of Hacker News.

Let us remember this day: April 4, 2026. From this day forward, the AI industry will shift from open collaboration to a fragmented landscape dominated by giants.
An official email from Claude confirms that Anthropic is targeting OpenClaw this time!
The policy takes effect first against OpenClaw on April 4, but it covers all third-party toolchains and will soon be extended to other tools.
As a gesture of goodwill, Anthropic is offering a one-time compensation equal to one month's subscription fee, valid until April 17.
The hammer fell
Note the timing of Anthropic's email: anything but accidental.
Friday evening, April 3rd, is exactly when internet companies love to dump bad news.
Anthropic sent a notification to tens of thousands of OpenClaw users: Starting tomorrow, your Claude subscription quota can no longer be used for OpenClaw. If you want to continue, you will be charged on a pay-as-you-go basis.

https://x.com/VadimStrizheus/status/2040199979927482618
After three months of encirclement and suppression, the final blow finally fell.
The trigger: A "defection" that could change the entire landscape.
Why would Anthropic, at this particular time, so ruthlessly destroy an open-source tool?
Because Peter Steinberger, the key figure behind OpenClaw and father of the lobster, joined arch-rival OpenAI.

Peter Steinberger was once one of the developers who understood the Claude ecosystem best; his OpenClaw made Claude incredibly user-friendly.
Now, however, for Anthropic, OpenClaw has become a "Trojan horse" for the enemy camp.
Anthropic believes that OpenClaw is no longer merely an efficiency tool, but an "intelligence gatherer" reaching into its own backyard.
Now that its founder is an OpenAI employee, the stance is simple: your tool can no longer ride on my subscription quota.
Peter himself spoke out helplessly, implying that Anthropic was "shutting the door to beat the dog" (cornering its target) while freeloading off the open-source community:
Dave Morin (a board member of OpenClaw) and I tried to persuade Anthropic to remain calm.
In the end, all we could manage was to postpone that day by a week.

https://x.com/steipete/status/2040209434019082522
Developers are in despair; their budgets were blown up overnight.
For ordinary developers, this ban is a "dimensionality-reduction strike": an attack from a level they cannot answer.
Previously, many developers achieved extremely low-cost automated workflows by purchasing a fixed monthly subscription to Claude and using OpenClaw's powerful interface.
For $20, you can buy a Claude Pro package and have your lobster use Claude 24/7. The same usage via the API would likely cost thousands of dollars.
One is a Max subscription capped at $200, and the other is an API fee in the four figures.
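The gap between the two billing modes can be seen with a quick back-of-the-envelope calculation. The per-token rates and usage figures below are purely illustrative assumptions, not Anthropic's actual pricing:

```python
# Back-of-the-envelope comparison: flat subscription vs pay-as-you-go API.
# All rates and volumes below are illustrative assumptions, not real pricing.

INPUT_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PER_MTOK = 15.00  # assumed $ per million output tokens

def monthly_api_cost(input_mtok: float, output_mtok: float) -> float:
    """Cost of a month of usage billed per token (volumes in millions of tokens)."""
    return input_mtok * INPUT_PER_MTOK + output_mtok * OUTPUT_PER_MTOK

# An always-on agent loop can easily burn, say, 500M input / 60M output tokens a month.
api_cost = monthly_api_cost(500, 60)
subscription_cost = 200  # the Max plan's flat monthly price

print(f"API: ${api_cost:,.0f}/mo vs subscription: ${subscription_cost}/mo")
```

Under these assumed numbers the API bill lands in the low four figures, against a flat $200 subscription, which is exactly the gulf developers are now facing.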

Now, Anthropic has cut off that path itself.
Pay-as-you-go billing means costs are no longer capped by a monthly subscription and become wildly unpredictable. The AI budgets of many small and medium-sized teams, once locked in monthly, are now exposed to sudden spikes.
Even worse, anyone unwilling to pay this exorbitant toll has to painfully rebuild their entire business logic within 24 hours.
In one word: unbelievable!
The lobster stirs up trouble; the feud began long ago.
The feud between the father of the lobster and Anthropic has been simmering for a long time.
Steinberger once publicly complained that Anthropic's dealings with him "basically relied entirely on lawyer's letters."
The first move: severing the brand.
At the end of January, a lawyer's letter forced Clawdbot to change its name.
The second move: a technical blockade.
On January 9th, Anthropic quietly added a detection mechanism on its servers: subscription tokens not issued through the official Claude Code client are rejected outright.
OpenClaw's core way of working was wiped out overnight.
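Server-side gating of this kind typically boils down to checking which client a token was issued to. Here is a minimal sketch of the idea; the field names, plan names, and client IDs are invented for illustration and do not reflect Anthropic's actual token format:

```python
# Hypothetical sketch of server-side client gating. All field names, plan
# names, and client IDs are invented; this is not Anthropic's actual scheme.

OFFICIAL_CLIENT_IDS = {"claude-code-cli"}  # tokens minted by the official client

def is_allowed(token_claims: dict) -> bool:
    """Reject subscription tokens unless they were issued to an official client.

    Pay-as-you-go API keys (any other plan value) pass through unaffected.
    """
    if token_claims.get("plan") in {"free", "pro", "max"}:
        return token_claims.get("client_id") in OFFICIAL_CLIENT_IDS
    return True

# A third-party tool presenting a Max-plan token is refused:
print(is_allowed({"plan": "max", "client_id": "openclaw"}))      # False
print(is_allowed({"plan": "max", "client_id": "claude-code-cli"}))  # True
```

The effect matches what developers reported: the same account works in the official client but is rejected everywhere else, with no change needed on the user's side.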
The third move: rewriting the terms.
In mid-February, the Terms of Service were updated: using OAuth tokens from Free, Pro, and Max accounts in any third-party tool is considered a violation.
The fourth and most ruthless move: cloning the features outright.
Claude Cowork launched Dispatch, allowing users to remotely control the Claude desktop application via their mobile phones; Claude Code launched Channels, connecting Telegram and Discord.
Within four weeks, the core functionality of OpenClaw was replicated by the official team one-to-one!
In the words of AI blogger Matthew Berman, "They just built their own version of OpenClaw."
This morning, tech media outlet Semafor reported that when asked if customers were asking the company to create its own OpenClaw, Anthropic's Chief Business Officer, Paul Smith, admitted that they were.


https://www.semafor.com/article/04/03/2026/anthropic-eyes-its-own-version-of-openclaw
And today, April 4th, this email is the final blow.
A calculated move: forcefully promoting its own "favorite son".
Claude Cowork
While blocking OpenClaw, Anthropic also revealed its true intentions.
They started subtly suggesting: Stop using those unreliable third-party libraries, and try our native Claude Cowork instead!

Claude Cowork allows Claude to have greater control over the coding environment and computer interface.
Actually, none of this is unique to Anthropic.
It is precisely the "platform lock-in" strategy that tech giants excel at:
Step 1: Attract developers by leveraging third-party open-source tools to expand the ecosystem.
Step 2: Find excuses (such as security, infrastructure pressure, etc.) to block third parties.
Step 3: Force users to migrate to their own more expensive and more strictly controlled native integrated tools.
This typical "vertical integration" approach allows them to firmly control the entry point and user experience, while gradually causing the "uncontrolled connectors" in the ecosystem to lose their advantage.
Interestingly, OpenAI chose a completely opposite path around the same time.
OpenAI explicitly allows Codex subscriptions to be used in third-party clients such as OpenClaw.
In March, they went a step further, announcing free ChatGPT Pro access for maintainers of open-source projects, with OpenClaw specifically named as a beneficiary. OpenAI scooped up everyone Anthropic wanted to drive away!
No one is completely innocent.
To be fair, Anthropic has its merits.
Every user who subscribes for $200 and generates API usage worth thousands of dollars is causing the company to lose money.
Third-party tools that bypass official telemetry, impersonate client identities, and create monitoring blind spots pose real engineering and security risks.
Moreover, Steinberger is now sitting in the OpenAI office—in Anthropic's eyes, OpenClaw has gone from being a "slightly annoying freeloader" to a "spy sent from the other side."
Nor is OpenClaw itself entirely clean.
It was found to have a high-risk vulnerability (CVE-2026-25253) with a CVSS score of 8.8, which allows attackers to steal users' authentication tokens through a single link.

https://nvd.nist.gov/vuln/detail/CVE-2026-25253
On the public internet, security agencies have detected more than 30,000 OpenClaw instances with their portals wide open.
But these justifications cannot dispel the developers' feeling of being cheated.
The platform opens the door wide, waits for people to come in and build the building, and then announces other arrangements—this script has been played out too many times.
X killing third-party clients, Apple tightening the App Store, and Google cutting free APIs—it's all the same story.
Every time, it's the developers who get hurt.
The open ecosystem is entering its twilight, and AI giants are closing themselves off from the world.
Anthropic's most lethal move today forced us to face a heartbreaking truth:
The golden age of AI, a free and open era for developers, is coming to an end.
We once thought AI would be like the early internet, based on shared protocols and evolving through community. But the reality is that model owners hold the power of life and death, able to wipe out the efforts of thousands at any time.
The countdown has now begun for the final ultimatum at 3 p.m. on April 4th.
Developers now face three brutal choices:
Swallow the bitter pill, endure exorbitant pay-as-you-go prices, and keep using OpenClaw until the budget runs dry?
Migrate entirely, kneel before Anthropic's native tools, and accept platform lock-in?
Or walk away in anger and abandon Claude altogether?
Finally, while the giants are busy fighting among themselves, please don't forget: it was thousands upon thousands of developers who built your hundred-billion-dollar thrones, line by line of code.
This time, Anthropic won the competition, but completely lost the community's trust!




