Anthropic and the Pentagon restart AI negotiations: seeking to preserve Claude defense cooperation while holding to ethical red lines


Dario Amodei, CEO of the American artificial intelligence company Anthropic, has recently restarted negotiations with the U.S. Department of Defense (Pentagon) in a final effort to secure contract terms for the military use of its AI model, Claude. The move comes after negotiations broke down last week and the Pentagon designated Anthropic a "supply chain risk"; Amodei aims to keep Anthropic from being excluded from U.S. military cooperation while preserving the company's core ethical principles.

Negotiation Background and Red Line Adherence

Anthropic signed a pilot contract worth up to $200 million with the Pentagon last year, making Claude the first advanced AI model approved for deployment on a classified network. However, the Trump administration subsequently demanded changes to the terms, insisting that AI must be applicable to "any lawful use" without any restrictions.

Anthropic, however, insisted on two "red lines": prohibiting the use of Claude for mass domestic surveillance of the American people, and prohibiting its use in fully autonomous weapons (systems that can select and attack targets without human intervention). Anthropic argues that such uses could jeopardize democratic values and that current AI technology is not yet reliable enough for them. Defense Secretary Pete Hegseth responded with an ultimatum demanding a compromise within a specified timeframe, threatening to terminate the contract and invoke relevant regulations to enforce compliance, which brought negotiations to the brink of collapse last Friday.

Latest Developments in the Resumed Negotiations

According to sources familiar with the matter, Amodei is currently in direct talks with Emil Michael, Under Secretary of Defense for Research and Engineering, with the goal of reaching a compromise that would allow the U.S. military to continue using Claude while significantly reducing the risk of Anthropic being officially blacklisted.

If successful, the new contract could ease tensions and reshape the competitive landscape of the AI industry. OpenAI, for example, has already reached an agreement with the Pentagon but is also adjusting its terms to include similar restrictions. Anthropic, for its part, emphasizes its continued willingness to cooperate but will not abandon its core safeguards; if no consensus can be reached, the company says it will assist in a smooth transition to other suppliers to avoid disrupting military operations.

Significance of the Event and Future Prospects

This controversy highlights the tug-of-war between AI companies and the military over ethics, safety, and national security. Anthropic, which positions itself as a "responsible AI" company, refuses to make unconditional concessions, reflecting the deep concerns some Silicon Valley companies hold about military applications of AI; the Pentagon, for its part, emphasizes battlefield flexibility and technological superiority.

If negotiations proceed smoothly, it could set a new precedent for the military application of AI; conversely, it could exacerbate industry fragmentation. Industry insiders expect an official statement or new developments to emerge soon.
