The White House is reportedly weighing a plan to test powerful artificial intelligence (AI) models before they are released to the public, according to reports published on May 5, 2026.
This new proposal could mark a major shift in U.S. AI policy. If implemented, the federal government would be directly involved in evaluating advanced AI models before they are widely released or applied to government systems.
The discussions reportedly revolve around a new executive order. This plan could establish an AI working group comprising government representatives, national security agencies, and technology industry leaders.
Will Trump become the "gatekeeper" of AI?
The immediate concern is security. Reports indicate that officials are worried about the risks advanced AI models could pose, including the ability to detect software vulnerabilities, write malicious code, or accelerate cyberattacks.
One model attracting attention is Anthropic's Claude Mythos. Cybersecurity experts warn that its programming capabilities could make planning and executing sophisticated attacks easier.
However, the White House has yet to confirm a final policy. Officials said that information regarding the new executive order is only speculation, and that any official decision will be announced by President Donald Trump.
The biggest risk is over-intervention. If pre-launch testing is required, AI development could slow down, create political pressure before each new model launch, and give Washington unprecedented control over private technology.
That said, the security concerns are not unfounded. If an AI model genuinely enhances cyberattack capabilities, the government has every reason to control how the model is accessed and by whom.
The key lies in the scope of control. If it's limited to models serving national security purposes and applications within government agencies, it will be more acceptable. Conversely, if it's extended to all large-scale AI models, this approval process will become highly controversial.
The cryptocurrency sector offers a recent parallel. In January 2025, Trump established a digital asset working group to coordinate policy across agencies. The group subsequently helped shape U.S. government policy on crypto, including stablecoin regulations and agency-level actions.
This history is noteworthy. Task forces established by Trump often begin as advisory bodies but can later become full-fledged policymaking machinery. If the AI oversight plan is implemented, it would be the first test of how far the Trump administration can control advanced AI technologies before they reach the market.