The U.S. military is weaponizing artificial intelligence (AI) at an unprecedented pace.
The U.S. Department of Defense (the Pentagon) announced on Friday (the 1st) that it has reached agreements with seven leading technology companies, giving the military access to their AI infrastructure and models on classified computer networks as it seeks a decisive advantage on the rapidly changing modern battlefield.
Seven major players join the ranks; the AI system GenAI.mil is launched.
The roster reads like a who's who of the tech industry: Google, Microsoft, Amazon Web Services (AWS), Nvidia, OpenAI, Reflection, and Elon Musk's SpaceX.
The Department of Defense said these companies' resources will help "enhance the decision-making capabilities of warfighters in complex operational environments." The Pentagon noted that military personnel are already using these AI capabilities through an official platform called GenAI.mil, cutting tedious tasks that previously took months down to a matter of days.
Pentagon chief technology officer Emil Michael singled out Nvidia and the startup Reflection. Both provide "open-source AI models," which Michael considers a strategic priority: creating an "American alternative" to counter China's rapidly advancing open-source AI systems.
Anthropic clashes with Trump administration over "AI killing" claims.
However, the absence of AI giant Anthropic stands out starkly in this celebration by the military and Silicon Valley.
According to reports, the negotiations broke down over ethical red lines. Anthropic demanded explicit contractual guarantees that the U.S. military "would not use its technology for autonomous weapons or for the surveillance of U.S. citizens."
This demand infuriated top officials in Washington. Defense Secretary Pete Hegseth took a hard line, insisting that the company allow the Pentagon to use its technology for "any legitimate purpose." President Donald Trump subsequently moved to impose a blanket ban on federal agencies' use of Anthropic's chatbot, Claude, and Hegseth went further, seeking to label the company a "supply chain risk." Ultimately, Anthropic chose to take the Trump administration to court.
OpenAI seizes the opportunity to fill the gap, but experts warn of "automation bias".
After Anthropic's exit, OpenAI quickly stepped in, reaching an agreement with the Pentagon in March of this year to replace Anthropic with ChatGPT. In a Friday statement, OpenAI made no secret of its support for the military: "As we said when we announced the agreement months ago, we firmly believe that those who defend America should have the best tools in the world." Sources also revealed that OpenAI secured a contract clause stipulating that "human supervision is required when AI acts autonomously."
As AI rapidly permeates command systems, ethical and risk concerns are coming to the fore. Helen Toner, acting executive director of Georgetown University's Center for Security and Emerging Technology (and a former OpenAI board member), warns that while AI excels at summarizing intelligence and surveillance footage, the military must not over-rely on the technology.
As Toner puts it, there is a phenomenon called "automation bias": people tend to assume that machines perform better than they actually do.
Israel's heavy reliance on AI to mark targets in the Gaza and Lebanon conflicts, which reportedly resulted in numerous civilian casualties, offers a recent cautionary example. Striking a balance between the pursuit of strategic advantage and basic human morality will be the most severe test the AI era leaves to all of humanity.