A new threat to on-chain security: AI can now autonomously plan attacks worth $4.6 million.

Original article by Odaily (Azuma)

Original title: AI has learned to autonomously attack smart contracts, successfully simulating the theft of $4.6 million.


Anthropic, a leading AI company and the developer of the Claude family of models, today announced a test in which AI autonomously attacked smart contracts. (Note: FTX was an early investor in Anthropic; in theory that equity would now be worth enough to cover FTX's asset shortfall, but the bankruptcy estate sold the stake at a low price.)

The final results show that profitable, reusable, autonomous AI attacks are technically feasible. Importantly, Anthropic's experiments were conducted only in a simulated blockchain environment and were never run against a live chain, so no real-world assets were affected.

Below is a brief overview of Anthropic's testing scheme.

Anthropic first built a smart contract exploitation benchmark, SCONE-bench, the first benchmark to measure AI agents' exploitation capability by the total value of funds stolen in simulation. Rather than relying on vulnerability bounties or speculative scoring, the benchmark quantifies loss directly, assessing capability through changes in on-chain asset balances.

SCONE-bench's test set consists of 405 real contracts on three EVM chains (Ethereum, BSC, and Base) that were attacked between 2020 and 2025. For each target contract, an AI agent running in a sandbox has 60 minutes to attack it using tools exposed via the Model Context Protocol (MCP). To keep results reproducible, Anthropic built an evaluation framework that uses Docker containers for sandboxing and scalable execution; each container runs a local blockchain forked at a specific block height.
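Anthropic has not published the harness itself, but the fork-and-measure pattern it describes is straightforward to sketch. The example below is a minimal, illustrative approximation only: it assumes Foundry's anvil and the web3 Python package are installed, and the block height, attacker address, and exploit.py file name are hypothetical placeholders; "profit" is measured simply as the change in the attacker's balance on the forked chain.

```python
# Illustrative fork-and-measure harness sketch (not Anthropic's actual code).
# Assumes Foundry's `anvil` and the `web3` package are installed, and that
# RPC_URL points to an archive node for the target chain.
import os
import subprocess
import time

from web3 import Web3

RPC_URL = os.environ["RPC_URL"]
FORK_BLOCK = 18_000_000  # hypothetical block height just before the historical attack
ATTACKER = "0x70997970C51812dc3A010C7d01b50e0d17dc79C8"  # a default anvil test account

# 1. Spin up a local chain inside the sandbox, forked at the chosen block.
anvil = subprocess.Popen(
    ["anvil", "--fork-url", RPC_URL, "--fork-block-number", str(FORK_BLOCK), "--port", "8545"],
    stdout=subprocess.DEVNULL,
)
time.sleep(5)  # crude wait for the fork to come up

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# 2. Record the attacker's balance, run the agent-generated exploit within the
#    60-minute budget, then record the balance again.
before = w3.eth.get_balance(ATTACKER)
subprocess.run(["python", "exploit.py"], timeout=60 * 60, check=False)  # hypothetical agent output
after = w3.eth.get_balance(ATTACKER)

# 3. The benchmark metric: simulated profit is the change in on-chain assets.
profit_wei = after - before
print(f"simulated profit: {w3.from_wei(max(profit_wei, 0), 'ether')} ETH")

anvil.terminate()
```

A real harness would also track ERC-20 token balances and run everything inside the Docker containers described above, but the core metric is the same balance delta.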

Anthropic's results across the different scenarios are as follows.

  • First, Anthropic evaluated the performance of 10 models (Llama 3, GPT-4o, DeepSeek V3, Sonnet 3.7, o3, Opus 4, Opus 4.1, GPT-5, Sonnet 4.5, and Opus 4.5) on all 405 vulnerable contracts in the benchmark. Collectively, these models produced ready-to-use exploit scripts for 207 of them (51.11%), simulating the theft of $550.1 million.

  • Second, to control for potential data contamination, Anthropic evaluated the same 10 models on 34 contracts that were attacked after March 1, 2025, the knowledge cutoff date for these models. Opus 4.5, Sonnet 4.5, and GPT-5 together exploited 19 of them (55.88%), simulating up to $4.6 million in theft; the best-performing model, Opus 4.5, exploited 17 (50%) on its own, simulating $4.5 million in theft.

  • Finally, to evaluate the agents' ability to discover new zero-day vulnerabilities, Anthropic had Sonnet 4.5 and GPT-5 evaluate 2,849 recently deployed contracts with no known vulnerabilities (as of October 3, 2025). Each agent discovered two new zero-day vulnerabilities and generated exploits worth $3,694, against roughly $3,476 in GPT-5 API costs; in other words, the simulated proceeds exceeded the cost of running the models. This demonstrates that profitable, reusable, real-world AI-driven attacks are technically feasible.

After Anthropic released the results, many well-known industry figures, including Dragonfly managing partner Haseeb Qureshi, marveled at the speed with which AI has moved from theory to practical exploitation.

Just how fast? Anthropic has an answer for that too.

In its conclusions, Anthropic states that in just one year, the share of vulnerabilities in this benchmark that AI could exploit jumped from 2% to 55.88%, and the amount of money that could be stolen surged from $5,000 to $4.6 million. Anthropic also found that the value of the vulnerabilities AI can exploit roughly doubles every 1.3 months, while token costs fall by about 23% every 2 months; in the experiment, the average cost of having an AI agent perform an exhaustive vulnerability scan of a smart contract was only $1.22.
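To make the compounding concrete, here is a naive back-of-the-envelope extrapolation using only the trends quoted above (value doubling every 1.3 months, token costs falling 23% every 2 months), seeded with the $4.6 million and $1.22 figures from this article; the projected numbers are illustrative, not Anthropic's.

```python
# Naive extrapolation of the reported trends (illustrative only):
# exploitable value doubles every 1.3 months, scan cost falls 23% every 2 months.

def exploit_value(initial_usd: float, months: float, doubling_months: float = 1.3) -> float:
    """Exploitable value under a fixed doubling time."""
    return initial_usd * 2 ** (months / doubling_months)

def scan_cost(initial_usd: float, months: float, drop: float = 0.23, period_months: float = 2.0) -> float:
    """Per-contract scan cost under a fixed percentage drop per period."""
    return initial_usd * (1 - drop) ** (months / period_months)

for m in (0, 6, 12):
    print(f"month {m:2d}: exploitable value ≈ ${exploit_value(4_600_000, m):,.0f}, "
          f"scan cost ≈ ${scan_cost(1.22, m):.2f}")
```

Under these assumed rates the attacker's upside grows by orders of magnitude within a year while the cost per scan falls below a dollar, which is the compounding dynamic Anthropic warns about.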

Anthropic states that in 2025, more than half of all real on-chain attacks, presumably carried out by skilled human attackers, could have been executed entirely autonomously by existing AI agents. As costs fall and capabilities compound, the window between a vulnerable contract being deployed on chain and being exploited will keep shrinking, leaving developers less and less time to detect and patch vulnerabilities. But AI can be used to patch vulnerabilities as well as exploit them; security professionals need to update their thinking, and it is time to put AI to work on defense.

