
From June 26 to 27, CertiK, the world's largest Web3 security company, appeared at Istanbul Blockchain Week (IBW 2025), where Chief Business Officer Jason Jiang joined two roundtable forums to share CertiK's latest observations and security insights on the convergence of AI and Web3. He shared the stage with experts including Nurettin Erginoz, Head of Cybersecurity Services at PwC Turkey, and Charlie Hu, co-founder of Bitlayer, to discuss the current state and security challenges of AI technology in DeFi.
At the forum, Jason Jiang pointed out that with the rapid development of large language models (LLMs) and AI agents, a brand-new financial paradigm, DeFAI (Decentralized AI Finance), is gradually taking shape. This transformation, however, also brings new attack surfaces and security concerns.
"DeFAI has broad prospects, but it also requires us to re-examine the trust mechanism in decentralized systems," Jason Jiang stated: "Unlike smart contracts based on fixed logic, the decision-making process of AI agents is influenced by context, time, and even historical interactions. This unpredictability not only exacerbates risks but also creates opportunities for attackers."
"AI agents" are essentially intelligent entities capable of autonomous decision-making and execution based on AI logic, typically authorized to run by users, protocols, or DAOs. The most typical representative is AI trading robots. Currently, most AI agents run on Web2 architecture, relying on centralized servers and APIs, making them vulnerable to injection attacks, model manipulation, or data tampering. Once hijacked, they can not only potentially cause financial losses but also affect the stability of the entire protocol.
When sharing specific attack cases, Jason Jiang described a typical scenario: an AI trading agent operating on behalf of a DeFi user monitors social media posts as trading signals. An attacker publishes a false alert, such as "Protocol X is under attack," which may prompt the agent to trigger an emergency liquidation. The operation not only causes losses to the user's assets but also creates market volatility that the attacker can exploit through front-running.
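To make that attack surface concrete, the sketch below shows the vulnerable pattern in minimal Python: untrusted social media text is mapped directly to a high-impact on-chain action. All names here (fetch_latest_posts, emergency_liquidate, TRIGGER_PHRASES) are invented for illustration and do not refer to any real system.

```python
# Hypothetical sketch of the vulnerable pattern: an agent that treats raw,
# attacker-writable social media text as an authoritative trading signal.

TRIGGER_PHRASES = ["under attack", "exploit confirmed", "funds drained"]

def fetch_latest_posts(protocol: str) -> list[str]:
    """Stand-in for a social media feed the agent polls for signals."""
    # In the vulnerable design, anyone can publish into this feed.
    return ["ALERT: Protocol X is under attack, withdraw now!"]  # attacker-controlled text

def emergency_liquidate(protocol: str) -> None:
    """Stand-in for the high-impact on-chain action the agent is authorized to perform."""
    print(f"Liquidating all positions on {protocol}...")

def naive_agent_loop(protocol: str) -> None:
    # Untrusted text is mapped directly to a high-impact action, with no source
    # verification, no on-chain corroboration, and no human review.
    for post in fetch_latest_posts(protocol):
        if any(phrase in post.lower() for phrase in TRIGGER_PHRASES):
            emergency_liquidate(protocol)  # attacker-induced fire sale, ripe for front-running
            break

if __name__ == "__main__":
    naive_agent_loop("Protocol X")
```

The weakness is not the keyword matching itself but the missing trust boundary: a single unverified signal is allowed to trigger an irreversible, high-value action.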

Jason Jiang believes that the security of AI agents should not rest with any single party; it is a shared responsibility of users, developers, and third-party security firms.
First, users need to understand the scope of the permissions an agent holds, grant them cautiously, and review the agent's high-risk operations. Second, developers should build in defensive measures at the design stage, such as prompt hardening, sandbox isolation, rate limiting, and fallback logic. Third, third-party security firms such as CertiK should provide independent reviews of AI agent model behavior, infrastructure, and on-chain integration, and work with developers and users to identify risks and propose mitigations.
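As an illustration of how two of those developer-side measures, rate limiting and fallback logic, might be wired into an agent's action executor, here is a minimal hypothetical Python sketch. The names (Action, GuardedExecutor, execute_guarded) are invented for this example and are not part of any specific framework.

```python
# Hypothetical sketch of developer-side guardrails: rate limiting plus a
# fallback path that holds high-risk actions for corroboration and user review.

import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    name: str
    high_risk: bool
    run: Callable[[], None]

@dataclass
class GuardedExecutor:
    max_actions_per_minute: int = 3
    _timestamps: list[float] = field(default_factory=list)

    def _within_rate_limit(self) -> bool:
        now = time.time()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        return len(self._timestamps) < self.max_actions_per_minute

    def execute_guarded(self, action: Action, corroborated: bool, user_approved: bool) -> None:
        # Rate limiting: cap how much damage a manipulated agent can do per minute.
        if not self._within_rate_limit():
            print(f"Rate limit hit, deferring '{action.name}'")
            return
        # Fallback logic: high-risk operations need on-chain corroboration and
        # user sign-off rather than acting on a single untrusted signal.
        if action.high_risk and not (corroborated and user_approved):
            print(f"Fallback: holding '{action.name}' for manual review")
            return
        self._timestamps.append(time.time())
        action.run()

if __name__ == "__main__":
    executor = GuardedExecutor()
    liquidation = Action("emergency_liquidate", high_risk=True,
                         run=lambda: print("Liquidating positions..."))
    # Signal came from social media only: no on-chain corroboration, no approval,
    # so the liquidation is held rather than executed.
    executor.execute_guarded(liquidation, corroborated=False, user_approved=False)
```

The design choice is simply to separate the agent's decision from its execution, so that a manipulated decision still has to pass independent checks before it can touch funds.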
Jason Jiang warned: "If we continue to treat AI agents as 'black boxes', real-world security incidents are only a matter of time."
For developers exploring the DeFAI direction, Jason Jiang's advice is: "Just like smart contracts, an AI agent's behavior logic is implemented in code. Since it is code, it can be attacked, and it therefore requires professional security audits and penetration testing."