Author: Swayam
Compiled by: TechFlow
The rapid development of Artificial Intelligence (AI) has concentrated unprecedented computing power, data resources, and algorithmic capabilities in the hands of a few large tech companies. As AI systems become increasingly integrated into society, questions of accessibility, transparency, and control have moved to the center of technological and policy discussions. In this context, the combination of blockchain technology and AI offers an alternative path worth exploring - a potential new way to redefine how AI systems are developed, deployed, scaled, and governed.
We are not aiming to completely overturn the existing AI infrastructure, but rather, through analysis, to explore the unique advantages that decentralized approaches may bring in certain specific use cases. At the same time, we acknowledge that in some contexts, traditional centralized systems may still be the more practical choice.
The following key questions have guided our research:
Can the core characteristics of decentralized systems (such as transparency and censorship resistance) complement the requirements of modern AI systems (such as efficiency and scalability), or will the two inevitably conflict?
In which aspects of the AI development pipeline - from data collection to model training and inference - can blockchain technology provide substantive improvements?
What technical and economic trade-offs will different components face in the design of decentralized AI systems?
Current Limitations in the AI Technology Stack
The Epoch AI team has made important contributions to analyzing the limitations of the current AI technology stack. Their research details the major bottlenecks that the expansion of AI training compute may face by 2030, using floating-point operations per second (FLOP/s) as the core metric of computational performance.
The research indicates that the scaling of AI training compute may be constrained by a variety of factors, including insufficient power supply, chip manufacturing technology bottlenecks, data scarcity, and network latency issues. Each of these factors sets a different upper limit on the achievable computational capacity, with latency being considered the most difficult theoretical barrier to overcome.
Their projections highlight the need for progress in hardware, energy efficiency, unlocking data captured on edge devices, and networking to support the future growth of artificial intelligence.
Power Constraints (Performance):
Feasibility of Scaling Power Infrastructure (2030 Projection): By 2030, the capacity of data center campuses is expected to reach 1 to 5 gigawatts (GW). However, this growth will require large-scale investments in power infrastructure and overcoming potential logistical and regulatory hurdles.
Constrained by energy supply and power infrastructure, training compute could still scale by up to roughly 10,000 times its current level; the sketch below illustrates how a power budget translates into such a compute ceiling.
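To make the power constraint concrete, the back-of-envelope sketch below (Python) converts a campus power budget into an upper bound on training compute. All numbers - a 2 GW campus, 1,400 W per accelerator all-in, roughly 1 PFLOP/s peak at 40% utilization, a 90-day run - are illustrative assumptions, not figures from the Epoch AI study.

```python
# Back-of-envelope sketch: how much training compute a power-constrained
# data-center campus could support. All numbers below are illustrative
# assumptions, not figures from the Epoch AI study.

CAMPUS_POWER_W = 2e9          # assume a 2 GW campus (within the projected 1-5 GW range)
WATTS_PER_ACCELERATOR = 1400  # assumed all-in draw per accelerator (chip + host + cooling)
PEAK_FLOPS_PER_ACCEL = 1e15   # assumed ~1 PFLOP/s peak per accelerator
UTILIZATION = 0.4             # assumed realized fraction of peak during training
TRAINING_DAYS = 90            # assumed length of a single training run

accelerators = CAMPUS_POWER_W / WATTS_PER_ACCELERATOR
effective_flops = accelerators * PEAK_FLOPS_PER_ACCEL * UTILIZATION
total_flop = effective_flops * TRAINING_DAYS * 86_400  # seconds per day

print(f"Accelerators powered:     {accelerators:,.0f}")
print(f"Effective compute:        {effective_flops:.2e} FLOP/s")
print(f"Compute per training run: {total_flop:.2e} FLOP")
```

Under these assumed numbers, a 2 GW campus supports on the order of 10^27 FLOP per run, and the ceiling rises roughly linearly with the power budget.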
Chip Production Capacity (Verifiability):
Currently, the production of advanced chips (such as NVIDIA H100, Google TPU v5) supporting high-end computing is limited by packaging technologies (like TSMC's CoWoS). This constraint directly impacts the availability and scalability of verifiable computing.
Chip manufacturing and supply chain bottlenecks are the primary obstacles, but a growth of up to 50,000 times in computational capacity may still be possible.
Furthermore, enabling secure enclaves or Trusted Execution Environments (TEEs) on advanced chips at the edge is crucial. These technologies not only allow computation results to be verified but also protect the privacy of sensitive data during computation.
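As a rough illustration of how TEE-backed verification might surface to an application, the sketch below binds a model, its input, and its output into a tagged claim that a verifier can check. It uses a pre-shared HMAC key purely as a simplified stand-in; real TEE attestation relies on hardware-rooted asymmetric keys and vendor attestation services, and the names and payload format here are hypothetical.

```python
# Minimal sketch of result attestation, using a shared HMAC key as a
# simplified stand-in for the hardware-rooted (asymmetric) attestation
# that real TEEs provide. Names and the payload format are hypothetical.
import hmac, hashlib, json

ATTESTATION_KEY = b"demo-key-provisioned-to-the-enclave"  # assumption: pre-shared for this sketch

def attest_result(model_id: str, input_bytes: bytes, output_bytes: bytes) -> dict:
    """Run inside the 'enclave': bind model, input, and output into a tagged claim."""
    claim = {
        "model_id": model_id,
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),
        "output_sha256": hashlib.sha256(output_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["tag"] = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_result(claim: dict) -> bool:
    """Run by the verifier: recompute the tag and compare in constant time."""
    payload = json.dumps({k: v for k, v in claim.items() if k != "tag"},
                         sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claim["tag"], expected)

claim = attest_result("llm-v1", b"prompt", b"completion")
print(verify_result(claim))  # True; tampering with any field breaks verification
```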
Data Scarcity (Privacy):
High-quality public training data is becoming increasingly scarce, while large volumes of valuable data remain locked on edge devices and in private silos; unlocking that data requires privacy-preserving techniques that let owners contribute it without surrendering control.
Latency Barrier (Performance):
Inherent Latency Limitations in Model Training: As AI models continue to grow in scale, the time required for a single forward and backward pass increases significantly due to the sequential nature of the computation process. This latency is a fundamental limitation in the model training process that directly impacts training speed.
Challenges in Scaling Batch Size: To mitigate latency, a common approach is to increase the batch size so that more data is processed in parallel. However, there are practical limits to batch-size expansion, such as insufficient memory capacity and diminishing marginal returns on model convergence once the batch grows beyond a critical size. These factors make it increasingly difficult to offset latency simply by enlarging batches; the rough sketch below illustrates the effect.
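The sketch below makes the latency argument concrete: wall-clock training time is bounded below by the number of sequential optimizer steps times the minimum time per step, and increasing the batch size only reduces the step count up to a critical batch size. The token counts, step latency, and critical batch size are illustrative assumptions.

```python
# Rough sketch of the latency barrier: training time >= (sequential steps) x
# (minimum time per step), and raising the batch size only reduces the step
# count up to a critical batch size. All numbers are illustrative assumptions.

TOTAL_TOKENS = 1e13           # assumed training-set size in tokens
MIN_STEP_LATENCY_S = 0.5      # assumed floor for one forward + backward + update pass
CRITICAL_BATCH_TOKENS = 6e7   # assumed batch size beyond which returns diminish sharply

def training_days(batch_tokens: float) -> float:
    # Beyond the critical batch size, extra parallelism stops cutting the
    # number of sequential steps needed for convergence.
    effective_batch = min(batch_tokens, CRITICAL_BATCH_TOKENS)
    steps = TOTAL_TOKENS / effective_batch
    return steps * MIN_STEP_LATENCY_S / 86_400

for batch in (1e6, 1e7, 6e7, 6e8):
    print(f"batch {batch:.0e} tokens -> ~{training_days(batch):.0f} days")
```

With these assumed numbers, going from a batch of 10^6 to 10^7 tokens cuts training time roughly tenfold, but pushing past the critical batch size yields no further reduction - the latency floor remains.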
Foundations
The Decentralized AI Triangle
The limitations facing AI described above (data scarcity, compute capacity bottlenecks, latency issues, and chip production capacity) collectively motivate the "Decentralized AI Triangle". This framework aims to balance privacy, verifiability, and performance - the three core attributes essential for the effectiveness, trustworthiness, and scalability of decentralized AI systems.
The following breakdown analyzes the key trade-offs between privacy, verifiability, and performance - their definitions, enabling technologies, and the challenges each faces (a brief sketch of one privacy technique follows the list):
Privacy: Protecting sensitive data is crucial during AI training and inference. Key technologies include Trusted Execution Environments (TEEs), Multi-Party Computation (MPC), Federated Learning, Fully Homomorphic Encryption (FHE), and Differential Privacy. While effective, these techniques introduce performance overhead, transparency issues that affect verifiability, and scalability limitations.
Verifiability: To ensure the correctness and integrity of computations, technologies such as Zero-Knowledge Proofs (ZKPs), Cryptographic Credentials, and Verifiable Computation are employed. However, balancing privacy and performance with verifiability often requires additional resources and time, potentially leading to computational delays.
Performance: Efficiently executing AI computations and enabling large-scale applications rely on distributed computing infrastructure, hardware acceleration, and high-performance networking. However, the adoption of privacy-enhancing techniques can slow down computation speeds, while verifiable computation also introduces additional overhead.
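As one concrete example from the privacy corner of the triangle, the sketch below applies the Laplace mechanism of differential privacy to a simple counting query over a sensitive dataset. The dataset, threshold, and epsilon values are illustrative; the point is the trade-off described above: a smaller epsilon gives stronger privacy but a noisier, less useful answer.

```python
# Minimal differential-privacy sketch (Laplace mechanism): release an
# aggregate statistic about sensitive data while bounding what it reveals
# about any single contributor. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(values: np.ndarray, threshold: float, epsilon: float) -> float:
    """Noisy count of values above a threshold.

    The count changes by at most 1 when one record is added or removed
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = float(np.sum(values > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

data = rng.normal(size=10_000)                      # stand-in for a sensitive dataset
print(dp_count(data, threshold=1.0, epsilon=0.5))   # noisier, stronger privacy
print(dp_count(data, threshold=1.0, epsilon=5.0))   # closer to the true count
```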
The Blockchain Trilemma:
The core challenge facing the blockchain domain is the Blockchain Trilemma, where each blockchain system must balance the following three aspects:
Decentralization: Distributing the network across multiple independent nodes to prevent any single entity from controlling the system.
Security: Ensuring the network is resistant to attacks and maintains data integrity, often requiring more verification and consensus processes.
Scalability: Rapidly and cost-effectively processing a large number of transactions, which typically means compromising on decentralization (reducing the number of nodes) or security (lowering the verification intensity).
For example, Ethereum prioritizes decentralization and security, resulting in relatively slower transaction processing speeds. For a deeper understanding of these trade-offs in blockchain architecture, you can refer to the relevant literature.
The AI-Blockchain Collaboration Matrix (3x3)
The integration of AI and blockchain is a complex exercise in trade-offs and opportunities. This matrix illustrates where the two technologies create friction, where they align naturally, and where they risk amplifying each other's weaknesses.
How the Collaboration Matrix Works
The collaboration intensity reflects the compatibility and impact of blockchain and AI properties in a specific domain. Specifically, it depends on how the two technologies work together to address challenges and enhance each other's functionality. For example, in the area of data privacy, the immutability of blockchain combined with the data processing capabilities of AI may lead to new solutions.
Example 1: Performance + Decentralization (Weak Collaboration)
In decentralized networks such as Bitcoin or Ethereum, performance is often constrained by several factors: the volatility of node resources, high communication latency, transaction processing costs, and the complexity of consensus. For AI applications that require low latency and high throughput (such as real-time AI inference or large-scale model training), these networks cannot provide the speed and computational reliability that high-performance workloads demand.
Example 2: Privacy + Decentralization (Strong Collaboration)
Privacy-preserving AI techniques such as federated learning can leverage the decentralized nature of blockchain to enable efficient collaboration while protecting user data. For example, SoraChain AI offers a solution in which blockchain-supported federated learning ensures that data owners retain ownership of their data: they can contribute high-quality data for model training while preserving privacy, a win-win for privacy and collaboration. A minimal sketch of this pattern follows.
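The sketch below shows the federated-averaging pattern this example relies on: each data owner trains locally and shares only model weights, never raw data, while a coordinator (which a blockchain-based aggregator could replace) averages the updates. The linear model, simulated clients, and size-weighted averaging are illustrative assumptions, not SoraChain AI's actual implementation.

```python
# Minimal federated-averaging (FedAvg-style) sketch: clients train locally on
# private data and share only weights; a coordinator averages the updates.
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """A few steps of local linear-regression SGD; raw (X, y) never leave the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights: list, client_sizes: list) -> np.ndarray:
    """Weight each client's update by its dataset size, then average."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three simulated data owners with private datasets drawn from the same task.
true_w = np.array([2.0, -1.0])
clients = []
for n in (200, 500, 300):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])

print(global_w)  # approaches [2, -1] without any raw data being shared
```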
The goal of this matrix is to help the industry clearly understand the intersection of AI and blockchain, guiding innovators and investors to prioritize feasible directions, explore promising areas, and avoid getting caught in projects that are only speculative.
AI-Blockchain Collaboration Matrix
The two axes of the collaboration matrix represent different attributes: one axis is the three core characteristics of decentralized AI systems - verifiability, privacy, and performance; the other axis is the blockchain trilemma - security, scalability, and decentralization. When these attributes intersect, they form a series of synergies, ranging from high compatibility to potential conflicts.
For example, when verifiability and security are combined (high collaboration), a robust system can be built to prove the correctness and integrity of AI computations. But when performance requirements conflict with decentralization (low collaboration), the high overhead of distributed systems can significantly impact efficiency. Furthermore, some combinations (such as privacy and scalability) are in the middle ground, with both potential and complex technical challenges.
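For readers who prefer code to prose, the sketch below encodes the matrix as a simple lookup table, filling in only the pairings discussed in the text (strong: verifiability + security, privacy + decentralization; medium: privacy + scalability; weak: performance + decentralization); the remaining cells are left as placeholders to be read off the full table.

```python
# Sketch of the 3x3 collaboration matrix as a lookup table. Only the pairings
# discussed in the text are rated; the rest are placeholders.
AI_PROPERTIES = ("verifiability", "privacy", "performance")
CHAIN_PROPERTIES = ("security", "scalability", "decentralization")

COLLABORATION = {
    ("verifiability", "security"): "strong",       # e.g. proving correctness of AI computations
    ("privacy", "decentralization"): "strong",     # e.g. blockchain-coordinated federated learning
    ("privacy", "scalability"): "medium",          # promising but technically complex
    ("performance", "decentralization"): "weak",   # consensus overhead vs. low-latency inference
}

def collaboration(ai_prop: str, chain_prop: str) -> str:
    assert ai_prop in AI_PROPERTIES and chain_prop in CHAIN_PROPERTIES
    return COLLABORATION.get((ai_prop, chain_prop), "unrated in this sketch")

print(collaboration("privacy", "decentralization"))     # strong
print(collaboration("performance", "decentralization")) # weak
```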
Why is this important?
Strategic Compass: This matrix provides clear direction for decision-makers, researchers, and developers, helping them focus on high-collaboration areas, such as using federated learning to ensure data privacy or leveraging decentralized computing to achieve scalable AI training.
Focusing on Impactful Innovation and Resource Allocation: Understanding the distribution of collaboration intensity (such as security + verifiability, privacy + decentralization) helps stakeholders concentrate resources on high-value areas, avoiding waste on weak collaborations or unrealistic integrations.
Guiding the Evolution of the Ecosystem: As AI and blockchain technologies continue to evolve, this matrix can serve as a dynamic tool to evaluate emerging projects, ensuring they align with real-world needs rather than fueling excessive hype.
The following table summarizes these attribute combinations by collaboration intensity (from strong to weak), and explains their practical operation in decentralized AI systems. The table also provides examples of innovative projects that showcase these combinations in real-world applications. Through this table, readers can gain a more intuitive understanding of the intersection of blockchain and AI technologies, identify truly impactful areas, and avoid those that are overhyped or technically infeasible.
AI-Blockchain Collaboration Matrix: Classifying Key Intersection Points of AI and Blockchain Technologies by Collaboration Intensity
Conclusion
The combination of blockchain and AI holds immense transformative potential, but future development requires clear direction and focused effort. The projects truly driving innovation are shaping the future of decentralized intelligence by tackling key challenges such as data privacy, scalability, and trust. For example, federated learning (privacy + decentralization) enables collaboration while protecting user data, distributed computing and training (performance + scalability) improve the efficiency of AI systems, and zkML (zero-knowledge machine learning, verifiability + security) provides guarantees for the trustworthiness of AI computations.
At the same time, this field calls for caution. Many so-called AI "agents" are merely thin wrappers around existing models with limited functionality, and their integration with blockchain lacks depth. True breakthroughs will come from projects that fully leverage the respective strengths of blockchain and AI and commit to solving real-world problems, rather than chasing market hype.
Looking ahead, the AI-Blockchain Collaboration Matrix will become an important tool for evaluating projects, effectively helping decision-makers distinguish truly impactful innovations from meaningless noise.
The next decade will belong to projects that can combine the high reliability of blockchain with the transformative capabilities of AI to solve real-world problems. For example, energy-efficient model training will significantly reduce the energy consumption of AI systems; privacy-preserving collaboration will provide a safer environment for data sharing; and scalable AI governance will drive the large-scale, more efficient deployment of intelligent systems. The industry needs to focus on these key areas to truly unlock the future of decentralized intelligence.