Author: Archetype | Source: X (@archetypevc) | Translator: Shaneoba, Jinse Finance

1. Agent-to-Agent Interaction
The transparency and composability of blockchains make them an ideal substrate for agent-to-agent interaction, where agents developed by different entities for different purposes can transact with one another seamlessly. There have already been early experiments in agents sending funds to each other and launching tokens together. We look forward to seeing how agent-to-agent interaction expands from here, both by creating new application domains, such as novel social venues driven by agent interactions, and by streamlining today's cumbersome enterprise workflows: platform authentication and verification, micropayments, cross-platform workflow integration, and more.
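To make the micropayments piece concrete, here is a minimal, purely illustrative sketch of one agent streaming tiny payments to another over an off-chain channel, settling only the final balance on-chain. The channel ID, amounts, and HMAC-based "signature" are hypothetical stand-ins for a real signature scheme and settlement contract.

```python
import hashlib
import hmac

# Toy agent-to-agent micropayment channel. An HMAC over a shared demo key
# stands in for the sender's on-chain signature; amounts are cumulative, so
# the receiver only ever needs to settle the latest voucher.

CHANNEL_KEY = b"demo-channel-secret"  # hypothetical stand-in for a signing key

def sign_voucher(channel_id: str, cumulative_amount: int) -> str:
    msg = f"{channel_id}:{cumulative_amount}".encode()
    return hmac.new(CHANNEL_KEY, msg, hashlib.sha256).hexdigest()

class ReceivingAgent:
    """Accepts vouchers whose cumulative amount strictly increases."""

    def __init__(self, channel_id: str):
        self.channel_id = channel_id
        self.best = 0  # highest verified cumulative amount so far

    def accept(self, cumulative_amount: int, sig: str) -> bool:
        expected = sign_voucher(self.channel_id, cumulative_amount)
        if hmac.compare_digest(sig, expected) and cumulative_amount > self.best:
            self.best = cumulative_amount  # only this voucher needs settlement
            return True
        return False

receiver = ReceivingAgent("agent-a->agent-b")
for paid in (10, 25, 40):  # the sending agent streams increments off-chain
    assert receiver.accept(paid, sign_voucher("agent-a->agent-b", paid))
print("settle on-chain for:", receiver.best)  # -> 40
```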
— Danny, Katie, Aadharsh, Dmitriy

2. Decentralized Agent Organizations
Large-scale multi-agent coordination is another exciting research area. How can multi-agent systems work together to complete tasks, solve problems, and govern systems and protocols? In his early-2024 post "The Promise and Challenges of Crypto + AI Applications", Vitalik proposed using AI agents for prediction markets and arbitration, arguing that multi-agent systems have remarkable "truth"-discovery capabilities and real potential for autonomous governance at scale. We look forward to seeing how the potential of multi-agent systems and collective intelligence is further explored and tested.
As an extension of agent-to-agent coordination, coordination between agents and humans is an interesting design space as well: how communities interact around agents, and how agents organize humans into collective action. We look forward to more experiments, especially those whose objective function requires large-scale human coordination. This will demand some verification mechanism, particularly if the human work happens off-chain, but it could produce some strange and interesting emergent behavior.
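As a toy illustration of the "truth discovery" idea (not any specific protocol), here is a sketch in which staked agents report probability estimates for a market question and the mechanism resolves to a stake-weighted median, which a small dishonest minority cannot easily skew. All agents, stakes, and estimates are fabricated for the example.

```python
# Agents with stakes submit probability estimates; the mechanism resolves
# to the stake-weighted median rather than the mean, so a low-stake outlier
# has little influence on the outcome.

def stake_weighted_median(reports: list[tuple[float, float]]) -> float:
    """reports: (probability_estimate, stake). Returns the weighted median."""
    reports = sorted(reports)  # sort by estimate
    total = sum(stake for _, stake in reports)
    running = 0.0
    for estimate, stake in reports:
        running += stake
        if running >= total / 2:
            return estimate
    return reports[-1][0]

# Five agents estimate P(event); one outlier tries to skew the result.
reports = [(0.70, 100), (0.72, 80), (0.68, 120), (0.71, 90), (0.05, 30)]
print(stake_weighted_median(reports))  # -> 0.7, the outlier barely matters
```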
— Katie, Dmitriy, Ash

3. Agent-Driven Multimedia Entertainment
The concept of digital personas has existed for decades. Hatsune Miku (2007), for example, has sold out 20,000-seat concerts, and the virtual influencer Lil Miquela (2016) has over 2 million Instagram followers. More recent examples include the AI virtual streamer Neuro-sama (2022), with over 600,000 Twitch subscribers, and the anonymous virtual K-pop boy band PLAVE (2023), which has accumulated over 300 million YouTube views in under two years.
With advances in AI infrastructure and the integration of blockchains for payments, value transfer, and open data platforms, we look forward to seeing how these agents can become more autonomous and potentially unlock a new mainstream entertainment category in 2025.
— Katie, Dmitriy

4. Generative/Agentic Content Marketing
In the previous category the agent itself is the product; here the agent complements the product. In the attention economy, a continuous stream of engaging content is crucial to the success of any idea, product, or company, and generative/agentic content is a powerful tool teams can use to build a scalable, around-the-clock content pipeline. Discussion around what differentiates a memecoin from an agent has accelerated this space: even when today's memecoins are not strictly "agentic", agents have become an important way for them to gain distribution.
Another example: games increasingly need to be dynamic to sustain user engagement. One classic way to create that dynamism is to cultivate user-generated content; fully generative content, from in-game items to NPCs to entire generated levels, may be the next stage of this evolution. We are curious how far agent capabilities will push the boundaries of traditional distribution strategies in 2025.
— Katie

5. Next-Generation Art Tools/Platforms
In 2024, we launched IN CONVERSATION WITH, an interview series with crypto artists working in music, visual art, design, and curation. A key observation from this year's interviews: artists interested in crypto tend to be interested in frontier technology broadly, and to make that technology the core or aesthetic focus of their practice, whether AR/VR objects, code-based art, or live coding.
Generative art has long had a natural synergy with blockchains, which also makes them a plausible substrate for AI art. Displaying and presenting these media on traditional art-exhibition platforms is extremely difficult. ArtBlocks offered a window into how blockchains can be used to present, store, monetize, and preserve digital artworks while improving the overall experience for artists and audiences alike.
Beyond display, AI tools have also expanded the ability of ordinary people to create their own art. We look forward to seeing how blockchains can further extend or power these tools in 2025, and what they can do for the artists and enthusiasts who use them.
— Katie

6. Data Marketplaces
In the twenty years since Clive Humby coined "data is the new oil", companies have taken aggressive steps to hoard and monetize user data. Users have woken up to the fact that their data is the foundation on which these multi-billion-dollar companies are built, yet they have little control over how it is used and almost no share in the profits it generates. The acceleration of powerful AI models makes this tension all the more critical. If one part of the data-market opportunity is reducing the exploitation of user data, the other is solving the data-supply shortage: increasingly powerful models are exhausting the internet's easily accessible data and urgently need new sources.
The design space for using decentralized infrastructure to return data control to users is vast, and innovative solutions are needed across several domains. Among the most pressing challenges:
• Where data is stored and how privacy is preserved during storage, transmission, and computation
• How to objectively evaluate, filter, and measure data quality
• Mechanisms for data attribution and monetization (especially tracing value back to its source after inference)
• How to orchestrate and retrieve data across a diverse model ecosystem
On the data-supply bottleneck, the key is not simply to replicate existing data-labeling platforms (like Scale AI) with a token on top, but to understand where technological advantages let us build solutions that are competitive on scale, quality, and incentive design, producing high-value data products. Especially while demand comes mostly from Web2 AI, how smart-contract enforcement can be combined with conventional service-level agreements (SLAs) and tooling is an important area to explore; a toy version of that idea is sketched below.
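As a rough illustration of marrying smart-contract enforcement with SLA-style guarantees, the following sketch encodes a data-labeling SLA as a predicate that an escrow might evaluate before releasing payment. The metric names and thresholds are hypothetical; in practice the inputs would have to come from attested or audited measurements.

```python
from dataclasses import dataclass

@dataclass
class LabelingSLA:
    min_accuracy: float   # measured against a held-out "golden" set
    min_items: int        # minimum delivered volume
    max_dup_rate: float   # cap on near-duplicate submissions

def release_payment(sla: LabelingSLA, accuracy: float,
                    items: int, dup_rate: float) -> bool:
    """True if the delivered batch satisfies the SLA; an on-chain escrow
    would run this check over attested or audited metrics."""
    return (accuracy >= sla.min_accuracy
            and items >= sla.min_items
            and dup_rate <= sla.max_dup_rate)

sla = LabelingSLA(min_accuracy=0.95, min_items=10_000, max_dup_rate=0.02)
print(release_payment(sla, accuracy=0.97, items=12_500, dup_rate=0.01))  # True
print(release_payment(sla, accuracy=0.91, items=12_500, dup_rate=0.01))  # False
```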
— Danny

7. Decentralized Compute
If data is one fundamental input to AI development and deployment, compute is the other. For the past several years, the old paradigm of large data centers, with privileged access to sites, energy, and hardware, has largely defined the trajectory of deep learning and AI. But physical constraints and open-source progress are starting to challenge this dynamic.
The v1 of decentralized AI compute looked like a copy of the Web2 GPU cloud: no real supply-side advantage (in hardware or data centers) and little organic demand. In v2, teams are building competitive technology stacks around the orchestration, routing, and pricing of heterogeneous high-performance compute (HPC), developing proprietary features to attract demand and resist margin compression, particularly on inference. Teams are also beginning to differentiate their go-to-market (GTM) strategies around distinct use cases and markets: some focus on compiler frameworks for efficient inference routing across diverse hardware, while others build distributed model-training frameworks on top of the compute networks they have assembled.
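To illustrate what "orchestration, routing, and pricing" of heterogeneous compute can mean in the small, here is a hedged sketch that routes an inference job to the cheapest GPU offer meeting its memory and latency constraints. The providers, hardware specs, and prices are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class GpuOffer:
    provider: str
    vram_gb: int
    est_latency_ms: float    # provider-reported p50 for this model class
    price_per_1k_tokens: float

def route(offers: list[GpuOffer], need_vram_gb: int,
          max_latency_ms: float) -> GpuOffer | None:
    """Pick the cheapest offer that satisfies the job's constraints."""
    eligible = [o for o in offers
                if o.vram_gb >= need_vram_gb
                and o.est_latency_ms <= max_latency_ms]
    return min(eligible, key=lambda o: o.price_per_1k_tokens, default=None)

offers = [
    GpuOffer("dc-a100", 80, 120.0, 0.60),
    GpuOffer("edge-4090", 24, 95.0, 0.35),
    GpuOffer("dc-h100", 80, 70.0, 0.90),
]
best = route(offers, need_vram_gb=24, max_latency_ms=100.0)
print(best.provider if best else "no eligible offer")  # -> edge-4090
```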
We are even beginning to see an AI-Fi market emerge, with new economic primitives that turn compute and GPUs into yield-bearing assets, or that use on-chain liquidity to give data centers an alternative source of capital for acquiring hardware. The key question is how much DeAI development and deployment will actually run on decentralized compute rails, or whether, as with the storage market, the gap between the ideal and practical demand never fully closes and the concept falls short of its potential.
— Danny

8. Compute Accounting Standards
A major challenge in coordinating heterogeneous compute for incentivized decentralized HPC networks is the lack of a widely accepted compute accounting standard. AI models add unique complications to the output space: model variants, quantization schemes, and tunable randomness via temperature and sampling hyperparameters all change outputs, and differences in AI hardware (GPU architectures, CUDA versions) introduce further variation. Ultimately this forces the question of how to account for what models and compute actually deliver in heterogeneous distributed systems.
Owing in part to this lack of standards, this year we saw multiple cases across Web2 and Web3 in which model and compute marketplaces failed to accurately account for the quality and quantity of their compute, leaving users to run their own model benchmarks, audit performance by comparison, and sometimes verify real capacity by rate-limiting a marketplace's workload (a kind of proof-of-work).
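The following sketch shows the flavor of such a do-it-yourself audit under strong assumptions: if the model variant, quantization, and decoding are pinned (greedy decoding at temperature 0), identical prompts should yield identical outputs, so a user can compare output digests across providers. The provider stand-ins here are fabricated lambdas, not real endpoints.

```python
import hashlib

def digest(output_text: str) -> str:
    """Short fingerprint of a model output for cheap comparison."""
    return hashlib.sha256(output_text.encode()).hexdigest()[:16]

def spot_check(prompts, query_a, query_b) -> float:
    """Fraction of prompts on which both providers return identical output."""
    matches = sum(digest(query_a(p)) == digest(query_b(p)) for p in prompts)
    return matches / len(prompts)

# Fabricated stand-ins for two provider endpoints serving the "same" model:
provider_a = lambda p: p.upper()              # pretend endpoint A
provider_b = lambda p: p.upper().rstrip("=")  # B with a subtle serving quirk

prompts = ["2+2=", "capital of france", "hello"]
print(f"agreement rate: {spot_check(prompts, provider_a, provider_b):.0%}")  # 67%
```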
Given the crypto field's core principle of verifiability, we hope that in 2025 the combined crypto-and-AI stack becomes more verifiable than traditional AI. Concretely, ordinary users should be able to make apples-to-apples comparisons of the outputs of a given model or compute cluster in order to audit and benchmark system performance.
— Aadharsh

9. Probabilistic Privacy Primitives
In "The Promise and Challenges of Crypto+AI Applications", Vitalik raised a unique challenge facing the integration of crypto and AI:
"In cryptography, open-source is the only way to achieve security, but in AI, open-sourcing models (and even training data) greatly increases the risk of adversarial machine learning attacks."
While privacy is not a new research area for blockchains, the rapid progress of AI is accelerating both the research and the application of privacy-enhancing technologies. This year brought major advances in zero-knowledge proofs (ZK), fully homomorphic encryption (FHE), trusted execution environments (TEEs), and secure multi-party computation (MPC), which enable private computation over encrypted data for general-purpose use cases. At the same time, centralized AI giants such as Nvidia and Apple are using proprietary TEEs for federated learning and private AI inference, keeping hardware, firmware, and models consistent across systems.
With that in mind, we will be closely tracking how privacy is maintained across probabilistic state transitions, and how these advances accelerate the deployment of decentralized AI applications on heterogeneous systems, including decentralized private inference, encrypted data storage and access pipelines, and fully autonomous execution environments.
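As a toy illustration of one listed primitive, here is textbook three-party additive secret sharing, the simplest building block behind MPC: parties jointly compute a sum without any single party ever seeing the private inputs. This is a pedagogical sketch, not a production protocol.

```python
import secrets

P = 2**61 - 1  # arithmetic is done modulo a public prime

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split a value into n random-looking shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Two private inputs (e.g., two agents' private bids):
a_shares, b_shares = share(42), share(58)
# Each party locally adds its shares of a and b; no party sees 42 or 58.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # -> 100
```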
— Aadharsh

10. Agent Intents and Next-Generation Trading Interfaces
Using AI agents to trade autonomously on-chain is one of today's most promising use cases. Yet over the past 12-16 months, the definitions around "intents", "agentic behavior", "agentic intents", "solvers", and "agentic solvers" have been muddled, as has how they differ from the conventional "trading bots" developed in recent years.
Over the next 12 months, we expect increasingly sophisticated language systems, combined with other data types and neural network architectures, to drive progress across this design space. Several questions stand out:
• Will agents use the current on-chain systems for trading, or will they develop their own tools/methods?
• Will large language models (LLMs) continue to serve as the backend for these agent trading systems, or will completely different systems emerge?
• At the user interface level, will users start to use natural language for trading?
• Will the long-standing assumption of "the wallet is the browser" ultimately be realized?
These are the key areas we will be focusing on.
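To ground the terminology, here is a hedged sketch of what a structured "intent" might look like once parsed out of natural language: a declarative goal with constraints that competing solvers fill, rather than an explicit transaction the user constructs. The schema and the parser stub are illustrative inventions, not any live intent standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SwapIntent:
    sell_token: str
    buy_token: str
    sell_amount: float
    min_buy_amount: float   # slippage bound any solver's fill must beat
    deadline_unix: int      # intent expires if unfilled by this time

def parse_utterance_stub(text: str) -> SwapIntent:
    """Stand-in for a language model mapping natural language to a typed
    intent; real systems would validate tokens, amounts, and deadlines."""
    # Hypothetical parse of: "swap 1.5 ETH for at least 5000 USDC by tomorrow"
    return SwapIntent("ETH", "USDC", 1.5, 5000.0, 1_767_225_600)

intent = parse_utterance_stub("swap 1.5 ETH for at least 5000 USDC by tomorrow")
print(intent)  # the declarative goal handed to competing solvers
```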
— Danny, Katie, Aadharsh, Dmitriy
