As AI agent technology expands rapidly across Silicon Valley, with investment reaching $8.2 billion in 2024, these autonomous systems are steadily moving into finance, infrastructure, and decision-making. Yet behind this technological wave lies a critical and often overlooked question: how can we verify whether AI agents are telling the truth?
Silicon Valley poured $8.2B into AI agents last year.
Soon, they will control our money, infrastructure, and decision-making.
But there's one problem no one's talking about:
How can we verify if AI agents are telling the truth? pic.twitter.com/zEj7z5mGyX
— Sergey Gorbunov (@sergey_nog) April 22, 2025
Silicon Valley Heavily Invests in AI Agents, But Is the Trust Foundation Still a 'Black Box'?
Chainlink co-founder Sergey Gorbunov pointed out yesterday that although AI agents are marketed as autonomous systems capable of completing complex tasks on their own, most still operate as a "black box" whose internal decision-making users cannot see and can only trust blindly:
Truly autonomous AI agents should be both "unstoppable" and "verifiable", yet current systems often fall short of this standard.
Why Is 'Verifiability' the Real Security Guarantee?
Gorbunov emphasized that verifiability means an AI agent must be able to clearly answer three questions: what did it do, how did it do it, and did it follow the predetermined rules?
Without these mechanisms, AI agents that gain control of critical infrastructure could pose enormous risks. If this "verification gap" is not properly addressed, it could become a hidden danger in the technology's development.
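To make the idea concrete, here is a minimal sketch in Python of what such an auditable action record might look like. It is only an illustration under stated assumptions: the record fields, the signing key, and the example action are all invented, and a production system would use public-key signatures and an append-only log rather than a shared HMAC key.

```python
# Minimal sketch of a verifiable action record: what the agent did, how it
# decided, and whether it followed its rules. All names are illustrative.
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict

AGENT_KEY = b"demo-signing-key"  # assumption: stand-in for a real keypair

@dataclass
class ActionRecord:
    action: str                # what did it do?
    reasoning: str             # how did it do it?
    rules_checked: list[str]   # which predetermined rules were evaluated
    compliant: bool            # did it follow them?

def sign_record(record: ActionRecord) -> str:
    """Serialize the record canonically and sign it, so auditors can
    detect any after-the-fact tampering."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: ActionRecord, signature: str) -> bool:
    return hmac.compare_digest(sign_record(record), signature)

record = ActionRecord(
    action="rebalance_portfolio",
    reasoning="target allocation drifted past the 5% threshold",
    rules_checked=["max_trade_size", "allowed_assets"],
    compliant=True,
)
signature = sign_record(record)
assert verify_record(record, signature)  # an auditor can replay this check
```

Because the signature covers the action, the reasoning, and the rule checks together, any of the three questions above can be re-audited after the fact without trusting the agent's word.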
Three Types of AI Agents with Different Verification Needs
According to EigenLayer founder Sreeram Kannan, AI agents can be divided into three types based on whom they serve:
Personal Agents: Primarily serving individuals, such as digital assistants, with relatively low verification requirements.
Commons Agents: Serving communities, requiring medium-intensity verification to ensure fairness and credibility.
Sovereign Agents: Completely independent of human operation, requiring the highest level of verification capabilities.
Within the next five years, these sovereign agents might control trillions of dollars in assets; without mature verification mechanisms, that would be like "building a house on quicksand".
Three-Level Verification System: Rebuilding the Trust Foundation of AI Agents
To solve the verification problem, Kannan proposed a three-level verification framework:
Proactive Verification: Assessment before task execution.
Retroactive Verification: Reviewing correctness after task completion.
Concurrent Verification: Continuous monitoring and recording during task execution.
This framework can make AI behavior transparent, thereby enhancing trust.
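As a rough illustration of how the three levels could wrap a single task, the Python sketch below runs a proactive check before execution, logs steps concurrently, and reviews the result retroactively. The task shape, budget rule, and function names are assumptions for illustration, not Kannan's specification.

```python
# Sketch of the three verification levels around one task. The budget rule,
# task fields, and log format are illustrative assumptions.
from typing import Callable

audit_log: list[str] = []

def proactive_check(task: dict) -> bool:
    """Proactive: assess the task before it runs (here, a spending limit)."""
    return task["amount"] <= task["budget"]

def run_with_concurrent_log(task: dict, execute: Callable[[dict], str]) -> str:
    """Concurrent: record each step while the task executes."""
    audit_log.append(f"start: {task['name']}")
    result = execute(task)
    audit_log.append(f"result: {result}")
    return result

def retroactive_review(expected: str, result: str) -> bool:
    """Retroactive: after completion, check the outcome against the log."""
    return result == expected and f"result: {result}" in audit_log

task = {"name": "pay_invoice", "amount": 40, "budget": 100}
if proactive_check(task):
    outcome = run_with_concurrent_log(task, lambda t: f"paid {t['amount']}")
    assert retroactive_review("paid 40", outcome)
```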
From Insurance Claims to Prediction Markets: Practical Applications of Verifiable AI
Kannan also pointed to insurance claims as a potential application for verifiable AI agents. In today's industry, a single company typically handles both policy issuance and claims review, an arrangement that often triggers crises of trust:
With verifiable AI agents, claims review can become an independent process, executed and audited under transparent mechanisms, improving fairness and credibility.
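A toy version of that separation might look like the following Python sketch, where the review logic is independent of the issuer and every rule check is recorded so the verdict can be audited. The policy rules and claim fields are invented for illustration, not an industry schema.

```python
# Sketch of an independent, auditable claims review. The rules and claim
# schema are illustrative assumptions, not an industry standard.
POLICY_RULES = {
    "max_payout": 10_000,
    "covered_events": {"fire", "flood"},
}

def review_claim(claim: dict) -> dict:
    """Evaluate every policy rule and record each outcome, so the verdict
    can be audited independently of the company that issued the policy."""
    checks = {
        "event_covered": claim["event"] in POLICY_RULES["covered_events"],
        "within_limit": claim["amount"] <= POLICY_RULES["max_payout"],
    }
    return {
        "claim_id": claim["id"],
        "checks": checks,               # every rule outcome is visible
        "approved": all(checks.values()),
    }

print(review_claim({"id": "C-17", "event": "flood", "amount": 2_500}))
```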
Meanwhile, platforms such as EigenBets, which combine ZK-TLS with verifiable inference layers, can make prediction markets more transparent and reduce dependence on centralized authorities.
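ZK-TLS and verifiable inference are far more involved than a short example can show, but their simplest building block is a public commitment: hashing a model's inputs and output so anyone can later check that a market settled on what the model actually produced. The sketch below shows only that building block, with every name invented for illustration; it is not EigenBets' actual mechanism.

```python
# Sketch of a commitment to an inference result. Publishing the hash (e.g.
# on-chain) lets anyone recompute and compare it at settlement time.
# Market ID, inputs, and output here are illustrative assumptions.
import hashlib
import json

def commit_inference(market_id: str, inputs: dict, output: str) -> str:
    payload = json.dumps(
        {"market": market_id, "inputs": inputs, "output": output},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

commitment = commit_inference("demo-market", {"source": "price_feed"}, "NO")
# Anyone holding the same data can verify the settlement matches:
assert commitment == commit_inference("demo-market", {"source": "price_feed"}, "NO")
```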
Blockchain + AI: Forging AI Agents' Ticket to the Future
As AI systems grow increasingly complex, Gorbunov believes blockchain technology can provide the necessary cryptographic trust foundation and help establish a robust verification framework:
Combining AI agents with blockchain not only enhances credibility and flexibility but also makes smart contracts truly "intelligent", paving the way for future AI applications.
At the end, Gorbunov also shared a link to his YouTube program "The Future of AI", noting that the key developments for AI agents will lie not just in building more powerful models, but in their ability to:
Prove the outcomes of their actions
Transparently present reasoning processes
Gain trust through cryptographic mechanisms
He emphasized: "Only by achieving these three goals can AI agents operate safely in future systems."
Risk Warning
Cryptocurrency investments carry high risk; prices can be extremely volatile, and you may lose your entire principal. Please assess the risks carefully.