The Trump administration is preparing to order US agencies to partner with artificial intelligence companies to protect networks from AI-enabled cyberattacks, though the directive would stop short of requiring government approval for cutting-edge models, according to people familiar with the matter.

A draft executive order from President Donald Trump would revamp existing cybersecurity information-sharing programs to include AI companies and address threats posed by the emerging technology, said the people, who spoke on condition of anonymity to discuss confidential proceedings. Those changes would make it easier to find and patch vulnerabilities across federal, state and local networks, as well as critical US infrastructure, without mandating new oversight of AI models, the people said.

It's unclear how soon Trump will sign the measure, which is still subject to change. Details of the latest plan for the order, and of policy ideas that were considered but are unlikely to survive internal White House deliberations, have not previously been reported.

In response to a request for comment, a White House official characterized discussion of potential executive orders as speculation and said any policy announcement would come directly from Trump. Spokespeople for top AI developers OpenAI and Anthropic PBC didn't immediately respond to messages seeking comment.

The directive is taking shape about a month after Anthropic revealed that its breakthrough Mythos model was extraordinarily adept at finding network vulnerabilities and could pose a major cybersecurity risk. The company has so far limited Mythos access to a handful of large tech and Wall Street companies, amid broader global alarm about the new threats it could pose to critical systems.

The planned order wouldn't require companies to allow the government to test their models before release, according to the people.
Earlier thinking about mandating government approval of new AI systems is falling out of favor, and the directive now emphasizes voluntary participation by developers, the people said.

Trump's chief of staff, Susie Wiles, who has begun to take a more direct role in shaping AI policy, said late Wednesday that the US government would refrain from playing favorites in the AI race. "When it comes to AI and cyber security, President Trump and his administration are not in the business of picking winners and losers," Wiles wrote on X. "This administration has one goal; ensure the best and safest tech is deployed rapidly to defeat any and all threats."

Trump administration officials have been pushing to make Mythos more widely available to federal agencies so they can test their networks for security flaws. White House officials recently rejected Anthropic's plans to distribute Mythos to several dozen additional companies and organizations, citing security concerns.

The Mythos breakthrough has prompted administration officials to accelerate existing efforts to craft AI policy addressing a range of security issues. Wiles and other administration officials met on April 17 with Anthropic Chief Executive Officer Dario Amodei; topics discussed included Mythos.

Treasury Secretary Scott Bessent has also weighed in on the cyber risks Mythos presents, describing a role for the US in mitigating threats. "What we're determined to do is work with our AI companies to allow them to continue to innovate, but our charge to the US government is maintaining safety, and there is a very important calculus here between innovation and safety," he said on Fox News last Sunday.

Speculation about whether the directive would mandate pre-deployment testing of models intensified this week following a New York Times report on plans for the order.
National Economic Council Director Kevin Hassett signaled Wednesday that the measure would include a call for testing models before their release, likening it to screening for pharmaceuticals.

The US already runs a voluntary program to evaluate AI systems before their release, and on Tuesday the Commerce Department announced an expansion of the initiative. Alphabet Inc.'s Google, Microsoft Corp. and xAI have agreed to give the US government access to their models to assess the systems' capabilities and help improve security. OpenAI and Anthropic were already part of the initiative, led by the department's Center for AI Standards and Innovation.

Chris Lehane, OpenAI's head of global affairs, said this week the firm is allowing the center to test its upcoming GPT-5.5-Cyber model. "We're partnering with the White House and the broader administration on a responsible deployment strategy, including a playbook to help get these capabilities into the hands of federal, state, and local governments, allies, and critical infrastructure partners," he wrote on LinkedIn.

Preparations for the Trump order are unfolding with the Pentagon still locked in a bitter dispute with Anthropic over the company's insistence on guardrails for military use of its technology.
After contract talks broke down in February, defense officials declared the company a threat to the supply chain and gave agencies a six-month period to stop using Anthropic's tools; until early this year, the company was the only provider of AI tools cleared for classified networks.

The order wouldn't name any specific AI companies as partners, nor would it bar Anthropic from participating in information sharing related to cyber threats. Anthropic is fighting the supply-chain risk designation in court, and defense officials have signaled recently that they regard the cyber capabilities of the company's Mythos model as a distinct issue.

Separately, US officials have been preparing a memo on national security agencies' adoption of AI that would call for using multiple vendors to minimize supply-chain vulnerabilities and require contractors working with the Pentagon to abide by the military's chain of command, Bloomberg has reported. Timing of that memo remains fluid, and the document wouldn't lift the Pentagon's supply-chain declaration. But its language would address some of the concerns raised by the Pentagon and Anthropic during increasingly tense contract negotiations.

The US already facilitates public-private partnerships focused on cybersecurity similar to the one outlined in the planned order. These run primarily through the FBI and the Cybersecurity and Infrastructure Security Agency, which work with private companies to investigate hacks, mitigate damage and share intelligence to aid in attack recovery. Under that model, private companies agree to cooperate because it enhances their own security; the pending order would offer similar incentives for AI companies.
US Prepares AI Security Order That Omits Mandatory Model Tests