A confidential memo has leaked in which Dario Amodei tears into OpenAI, denouncing its Pentagon deal as "safety theater" for all to see. Meanwhile, it is undeniable that the US State Department has largely abandoned Claude in favor of GPT-4.1.
He finally couldn't hold it in any longer!
After Claude was banned outright, Dario Amodei wrote the "craziest" internal memo in Silicon Valley history:
The deal between OpenAI and the Pentagon was pure "safety theater."
"They're just putting on an act, trying to fool the whole world."
Amodei also bluntly stated that the US government dislikes Anthropic mainly because they are unwilling to bow down and fawn over it.
He even named OpenAI President Greg Brockman and his wife directly, saying they had donated $25 million to Trump's Super PAC.
This 1,600-word memo reveals Amodei's current anxiety in every sentence.
In order to retain their government contracts, more than a dozen defense technology companies have now had to issue an ultimatum to their employees: stop using Claude immediately!
Moreover, Anthropic's multi-billion dollar orders with Google, Amazon, and Microsoft will face rigorous scrutiny.
Meanwhile, OpenAI, which has stepped in to "take over," is already reaping the first rewards.
The State Department's in-house chatbot, StateChat, switched its underlying model from Claude to GPT-4.1 overnight.
Yes, the original text really does say 4.1, not 5.1.
It is no exaggeration to say that the relationship between Anthropic and OpenAI is extremely tense, with the air thick with the smell of gunpowder.
Slamming Altman's "safety theater": Anthropic CEO erupts in fury
In this lengthy letter to 2,000 employees, Dario Amodei launched a full-scale attack on arch-rival OpenAI and Sam Altman.
Amodei bluntly pointed out that the deadlock did not stem from technical disagreements, but rather from Anthropic's refusal to "curry favor" with the current government.
Even more provocatively, he directly accused OpenAI of engaging in "dictator-style flattery."
In the memo, he expressed deep skepticism about the safety measures OpenAI has been touting.
Altman has repeatedly stated publicly that OpenAI and the Department of Defense have set "red lines," with several safeguard clauses added to the contract.
But a lengthy CNBC article shattered Altman's "hypocritical" facade:
The contract explicitly states that the Pentagon can use AI for "all legitimate purposes".
Altman also stated at an internal all-staff meeting that OpenAI has no authority to decide how the Department of Defense should use the GPT model.
Amodei said that while Anthropic was under siege, Altman seized the moment, trying to erode public support for Anthropic and embolden the government to crack down even harder.
The real reasons why the DoW and the Trump administration dislike Anthropic are:
• We didn't donate money to Trump (while OpenAI and Greg Brockman donated a lot);
• We didn't sing Trump's praises (but Altman did);
• We support AI regulation that is detrimental to their agenda;
• We spoke the truth on many AI policy issues (such as unemployment);
• We held our red lines and did not collude with them in any "safety theater" to deceive our employees.
Only 20% effective; the rest is all for show
In the memo, Amodei revealed that during negotiations Palantir had, on behalf of the Pentagon, proposed a potential solution to Anthropic:
use a "classifier," a machine-learning system, to determine whether Anthropic's red lines had been crossed.
However, Amodei pointed out that AI models are very easy to "jailbreak."
Furthermore, Anthropic had considered having employees monitor the deployment, but this monitoring mechanism could only cover "extremely limited" use cases.
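As an aside, the "classifier" idea refers to an automated gate that screens each request before it reaches the model. The toy sketch below is purely illustrative; the phrase lists, function names, and logic are invented here and are not Anthropic's or Palantir's actual system:

```python
# Toy illustration of a "classifier" safety gate: screen each request
# before it reaches the model and refuse anything that crosses a
# predefined red line. Real systems use trained ML classifiers rather
# than keyword lists, and, as the memo argues, remain easy to
# "jailbreak" with rephrased prompts.

RED_LINES = {
    "autonomous weapon targeting": ["autonomous strike", "select and engage targets"],
    "bulk surveillance": ["bulk location monitoring", "track every citizen"],
}

def classify_request(prompt):
    """Return (allowed, matched_red_line). Stand-in for an ML classifier."""
    text = prompt.lower()
    for red_line, phrases in RED_LINES.items():
        if any(p in text for p in phrases):
            return False, red_line
    return True, None

def guarded_model_call(prompt):
    """Only forward prompts that the classifier clears."""
    allowed, red_line = classify_request(prompt)
    if not allowed:
        return f"refused: crosses red line ({red_line})"
    # A real deployment would call the underlying model here.
    return "forwarded to model"
```

The obvious weakness, which is exactly Amodei's 20% point, is that a slightly rephrased request ("fully automated target selection," say) slips straight past a filter like this.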
He bluntly stated that these methods are not entirely useless.
However, in the context of military applications, they may be only 20% effective in practice, with the remaining 80% pure "safety theater."
This directly refutes Altman's claims, exposing the "red lines" as self-deception that works only 20% of the time.
Amodei also revealed more inside information: the "safety layer" Palantir was selling to Anthropic (and to OpenAI) was purely for show.
As negotiations neared their conclusion last week, the Pentagon offered to accept the terms if the company removed specific clauses concerning "analyzing bulk data."
But Anthropic did not abandon its principles and refused to compromise.
"A group of people who are very easy to fool."
What's even more interesting is that Amodei also mentioned that this kind of manipulative and distorted narrative doesn't work very well with the general public or the media.
However, he added, it works quite well on some of the "brain-dead" commentators on X.
What worries him most is whether this narrative will fool OpenAI's own employees as well.
Thanks to selection effects, they are a group that is easily fooled, which makes it all the more important to push back against the brainwashing Altman sells to his staff.
Claude forced out, urgently replaced by GPT-4.1
The power structure of Silicon Valley's AI circle was shaken up overnight.
Now, the U.S. State Department, Treasury Department, and Department of Health and Human Services (HHS) —three cabinet-level agencies—have all made official announcements:
Stop using all Anthropic AI products.
This marks another group of core government departments, following the US military, officially joining the camp of OpenAI and Google.
The source of this "major purge" points directly to last week's "total ban" order.
Because Anthropic and the Pentagon failed to reach an agreement on military AI security, Trump directly issued an order—
All federal government agencies across the United States must terminate their contracts with Anthropic within six months.
Following this, the Treasury under Secretary Scott Bessent, the Federal Housing Finance Agency (FHFA), and Fannie Mae and Freddie Mac all dropped Claude entirely.
This series of actions was undoubtedly a heavy blow to Anthropic.
Once a leading company in the field of national security AI in the United States, it is now facing "public disdain" from the US government.
Reuters reported exclusively that the Department of Health and Human Services (HHS) has formally notified its staff that the ban is in effect, requiring them to switch to ChatGPT or Gemini.
Even StateChat, the chatbot developed by the U.S. State Department, urgently switched from Claude to GPT-4.1.
Previous reports suggested that the Claude model used by the US military might be at the Opus 5.5 level; by comparison, GPT-4.1 looks tiny.
State Department spokesman Tommy Pigott said in an email, "In order to implement the President's directive to cancel the Anthropic contract, we are taking immediate action to ensure that all projects are fully compliant."
GPT-4.1 was released back in April 2025.
Defense tech firms cut ties with Claude overnight to protect themselves
Almost overnight, Anthropic became a discarded pawn in the defense technology sector.
In order to secure government contracts, a large number of defense technology companies have ordered their employees to stop using Claude and switch to other LLMs.
Alexander Harstrick, a managing partner at J2 Ventures, revealed that 10 firms in his portfolio have urgently replaced Claude.
Even defense industry giants like Lockheed Martin have been reported to be working to completely remove Anthropic from their supply chain.
Harstrick admitted, "Companies are replacing Claude not because the product is bad, but purely to avoid risk."
This was a complete shock to Anthropic.
Just this January, Dario Amodei said, "80% of our revenue comes from enterprise clients."
In late 2024, Anthropic successfully entered the Department of Defense ecosystem through a partnership with software service provider Palantir.
Back in the day, Claude was the first AI model to break into the US government's classified networks, even landing a $200 million deal. Who would have thought the tide would turn so quickly?
This Pentagon ban has abruptly severed the deep partnership between Palantir, Silicon Valley's most secretive military-industrial AI powerhouse, and Claude.
Of Palantir's nearly $4.5 billion in revenue last year, a staggering 42% came directly from the U.S. government.
Over the past year, Palantir has not only integrated Claude into its system, but has also made in-depth customizations specifically for its core analytics and database platform.
The two are bound not by a simple API call but by a deep, foundational integration.
Now, faced with the choice of either completely removing Claude or losing its government contracts, Palantir has unhesitatingly chosen the former.
A venture capitalist in the defense technology sector said that any legitimate company doing business with the federal government would not bet on just one supplier.
Therefore, dropping Anthropic shouldn't cause any major problems.
Even so, Anthropic continues to hold its two safety "bottom lines," refusing to allow the U.S. military to use Claude for autonomous weapon targeting or for surveillance of individuals.
Meanwhile, its old rival OpenAI is clearly much more "sensible".
It not only quickly filled the gap in the Department of Defense, but Altman also flexibly modified the terms, promising not to "intentionally" spy on American citizens.
Furthermore, the US government has also signed contracts with Google and Musk's xAI for Gemini and Grok, respectively.
Billions of dollars, now under scrutiny
Not only are collaborations with defense technology companies severely affected, but even Anthropic's partnerships with major tech companies may be "cut off."
The characterization of "supply chain risk" has raised concerns among Anthropic's partners.
As is well known, Amazon was once Anthropic's biggest financial backer, investing $8 billion in it.
Most of Anthropic's AI services, which it sells directly to enterprise customers, run on AWS servers.
In return, Anthropic shares with Amazon a portion of the revenue generated from selling Claude to AWS customers.
Moreover, Google has invested billions of dollars in it and runs Claude on its cloud servers.
Last year, it was reported that Anthropic was willing to spend $25 billion to buy nearly 1 million TPU v7 chips.
Anthropic is expected to pay at least $80 billion to Amazon, Google and Microsoft by 2029 to run Claude AI on its cloud servers, a figure far higher than the approximately $3 billion paid last year.
However, if the "supply chain risk" label sticks, the consequences for Anthropic could be devastating.
This means that core partners, including Amazon, Google, and Microsoft, may be forced to sever ties with them.
The 1,600-word memo, exposed
Finally, here is the full text of Amodei's memo, lightly edited for clarity:
References:
https://www.theinformation.com/articles/anthropic-ceo-told-employees-openai-pentagon-deal-safety-theater
https://www.theinformation.com/articles/read-anthropic-ceos-memo-attacking-openais-mendacious-pentagon-announcement
https://www.theinformation.com/articles/next-anthropics-showdown-defense-department
https://www.reuters.com/business/us-treasury-ending-all-use-anthropic-products-says-bessent-2026-03-02/
This article is from the WeChat official account "New Zhiyuan", author: New Zhiyuan, and is published with authorization from 36Kr.


