An early exploration of the current landscape of DeFai (DeFi x AI).

1/ Introduction:
In just 3 months, the AI x memecoin market has reached a market cap of $13.4B, rivaling established L1 blockchains like AVAX and SUI.
In fact, AI has a long history with blockchain, from the early days of decentralized model training on Bittensor subnets, to decentralized GPU and compute marketplaces like Akash and io.net, to the current wave of AI x memecoins and frameworks on Solana. Each phase has shown that crypto, to some degree, can complement AI with resource aggregation, enabling sovereign AI and consumer use cases.
The first wave of Solana AI tokens introduced meaningful utility beyond speculation. Notable examples include ai16z’s ELIZA framework, Virtual’s aixbt AI agents for market analysis and content creation, and various toolkits integrating AI with blockchain capabilities.
The second AI wave is unfolding with mature tools in place: real applications and implementations have become the primary value drivers, with DeFi emerging as the ideal testing ground for these innovations.
According to CoinGecko, the market cap of DeFai stands at ~$1B. Griffain dominates the market with a 45% share, while $ANON holds 22%. The sector began growing rapidly after Dec 25, when frameworks and platforms like Virtual and ai16z gained momentum following the return of “US money” after the Christmas break.

This is merely the beginning. DeFai’s potential reaches far beyond its current state. While its integration remains in the proof-of-concept phase, we shouldn’t underestimate its capacity to revolutionize DeFi into a more intelligent, user-friendly, and efficient financial ecosystem through AI capabilities.
Before exploring DeFai’s landscape, it’s essential to understand the fundamental mechanics of how agents operate within DeFi and blockchain environments.

2/ How do agents work in DeFi?
AI agents are programs that execute tasks on behalf of users following specific workflows. At their core, these agents are powered by LLMs that generate responses based on their training data.
These agents enhance user experience through memory retention, storing past interactions to learn from user behavior patterns. This capability enables them to adapt their responses and generate personalized recommendations and strategies based on historical context.
In blockchain, agents can interact with smart contracts and accounts to handle complex tasks without constant human intervention. For example, they can simplify DeFi UX by executing multi-step bridging and farming in one click, optimize yield farming strategies for better returns, execute trades (buying/selling), and perform market analysis, all autonomously.
Referencing @threesigmaxyz's research, most agents follow a workflow built from six components:
- Data collection
- Model inference
- Decision making
- Hosting and operations
- Interoperability
- Wallet
- Data collection:
First, models need to understand the environment they operate in, so they require multiple data streams to stay current with market conditions. These include 1) on-chain data from indexers and oracles, and 2) off-chain data via APIs from price platforms such as CMC, CoinGecko, and other data providers.
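As a sketch of this step, the snippet below assembles one snapshot from both streams. The CoinGecko `simple/price` endpoint is a real public API, but the on-chain reader is a stub standing in for a hypothetical indexer/oracle client, and all field names are illustrative.

```python
# Minimal data-collection sketch: combine an off-chain price feed with
# (stubbed) on-chain state into one snapshot an agent could reason over.
from urllib.parse import urlencode

COINGECKO = "https://api.coingecko.com/api/v3/simple/price"

def price_feed_url(token_ids, vs="usd"):
    """Build the off-chain price request (CoinGecko simple/price API)."""
    return f"{COINGECKO}?{urlencode({'ids': ','.join(token_ids), 'vs_currencies': vs})}"

def read_onchain_state(pool):
    """Stub for an indexer/oracle query; a real agent would hit an RPC node
    or indexing service here. Values below are made up."""
    return {"pool": pool, "tvl_usd": 3_900_000, "utilization": 0.72}

def collect(tokens, pool):
    """One data-collection tick: merge both streams into a single snapshot."""
    return {"prices_url": price_feed_url(tokens), "onchain": read_onchain_state(pool)}

snapshot = collect(["solana", "ethereum"], "SOL-USDC")
```

In practice the snapshot would be fed into the inference step described next, and refreshed on every tick.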
- Model inference:
Once a model has learned its environment, it must apply that knowledge to make predictions or take actions on new, unseen input data for users. Models that agents use include:
1) Supervised and Unsupervised Learning: Models trained on labeled or unlabeled data to predict outcomes. In blockchain contexts, these can analyze governance forum data to predict voting results or identify transaction patterns.
2) Reinforcement Learning: Models that learn through trial-and-error by evaluating the rewards and consequences of their actions. Applications include optimizing token trading strategies, such as determining optimal entry points for token purchases or adjusting yield farming parameters.
3) Natural Language Processing (NLP): Technique that understands and processes human language input. It’s valuable for scanning governance forums and proposals for insight or summary.
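To make the reinforcement-learning idea concrete, here is a toy epsilon-greedy bandit that learns, by trial and error, which of three made-up pools pays the best yield. Pool names, APYs, and noise levels are all invented for illustration; real trading agents face a far harder, non-stationary problem.

```python
# Toy RL sketch: epsilon-greedy bandit choosing among yield pools.
# The agent never sees TRUE_APY directly; it only observes noisy rewards.
import random

random.seed(7)
TRUE_APY = {"pool_a": 0.03, "pool_b": 0.08, "pool_c": 0.05}  # hidden from the agent

estimates = {p: 0.0 for p in TRUE_APY}  # agent's running yield estimates
counts = {p: 0 for p in TRUE_APY}

def pick(epsilon=0.1):
    if random.random() < epsilon:              # explore a random pool
        return random.choice(list(TRUE_APY))
    return max(estimates, key=estimates.get)   # exploit the best-known pool

for _ in range(2000):
    pool = pick()
    reward = TRUE_APY[pool] + random.gauss(0, 0.01)  # noisy observed yield
    counts[pool] += 1
    estimates[pool] += (reward - estimates[pool]) / counts[pool]  # running mean

best = max(estimates, key=estimates.get)
```

After enough trials the agent's estimate converges on the highest-yield pool, which is the same trial-and-error loop, vastly simplified, behind the trading-strategy optimization described above.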
- Decision making
With trained models and data, agents take action through their decision-making capabilities. This involves interpreting the current situation and responding appropriately.
During this phase, the optimization engine plays an important part for finding the best possible result. For example, agents need to balance multiple factors such as slippage, price difference, transaction costs, and potential profits before executing a yield strategy.
Since a single agent may not be optimized for decision making across different domains, multi-agent systems can be deployed to coordinate actions.
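A stripped-down sketch of that optimization step might look like the following; the routes, numbers, and field names are invented for illustration, not drawn from any particular protocol.

```python
# Decision-making sketch: score candidate routes by expected net profit
# after slippage and transaction costs, then pick the best (or abstain).

def net_profit(route):
    """Expected profit minus gas and slippage cost for one candidate route."""
    gross = route["expected_profit"]
    cost = route["gas_cost"] + route["notional"] * route["slippage"]
    return gross - cost

def choose_route(routes, min_profit=0.0):
    """Pick the highest-scoring route; abstain if nothing clears the bar."""
    best = max(routes, key=net_profit)
    return best if net_profit(best) > min_profit else None

routes = [
    {"name": "dex_a", "expected_profit": 12.0, "gas_cost": 1.5, "notional": 1000, "slippage": 0.004},
    {"name": "dex_b", "expected_profit": 10.0, "gas_cost": 0.2, "notional": 1000, "slippage": 0.001},
]
best = choose_route(routes)
```

Note that the nominally more profitable route loses here once slippage and gas are priced in, which is exactly the trade-off the optimization engine has to resolve.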
- Hosting and operations
AI agents usually host their models off-chain due to the computationally intensive nature of their tasks. Some rely on centralized cloud services like AWS, while those preferring decentralization use distributed computing networks like Akash or io.net, alongside Arweave for data storage.
Although the AI models operate off-chain, agents need to interact with on-chain protocols to execute smart contract functions and manage assets. This interaction requires secure key management solutions like MPC wallets or smart contract wallets to handle transactions safely. Agents can operate through APIs to communicate and engage with their communities across social platforms such as Twitter and Telegram.
- Interoperability
Agents need to interact with various protocols while staying updated across different systems. They commonly use API bridges to fetch external data, such as price feeds.
To keep up to date with current protocol states and enable appropriate responses, agents rely on real-time synchronization through webhooks or decentralized messaging layers such as the pubsub system underlying IPFS.
- Wallet:
Agents need a wallet or access to a private key to initiate blockchain transactions. There are two common types of wallet/key management on the market:
- MPC-based
- TEE-based
For portfolio management applications, MPC or TSS can split keys between agents, users, and trusted parties, so users still maintain a degree of control over the AI. Coinbase's AI Replit wallet demonstrates this approach effectively, showing how to implement MPC wallets with AI agents.
For fully autonomous AI systems, TEEs offer an alternative where private keys are stored within a secure enclave, allowing the entire AI agent to operate in a concealed, protected environment safe from third-party interference. However, TEE solutions currently face two main challenges: hardware centralization and performance overhead.
Once you have collected all six stones of AI, you can create an autonomous agent on the blockchain. Now, different agents can each play a role in the DeFi ecosystem to improve efficiency and the on-chain trading experience.

3/ DeFai Ecosystem Mapping
DeFai has 4 main categories:
3.1 Abstraction / UX friendly AI
3.2 Yield optimisation or portfolio management
3.3 Market analysis or prediction bots
3.4 DeFai infrastructure or platforms

3.1 Abstraction AI / UX friendly AI
The core purpose of AI is to enhance efficiency, solve complex problems, and simplify complicated tasks for users. In DeFi, abstraction-based AI serves to reduce the complexity barrier, making DeFi more accessible to both newcomers and experienced traders.
In blockchain, an effective AI solution should be able to:
- Automatically execute multi-step trading and staking operations, requiring no prior industry knowledge from users.
- Perform real-time research and deliver the info and data users need to make informed trading decisions.
- Fetch data from various platforms to identify market opportunities and provide comprehensive analysis for users.
Most of these abstraction tools are powered by ChatGPT at their core.
While these models need to integrate seamlessly with blockchain, no models appear to be specifically trained or fine-tuned on blockchain data.
- Griffain:

Tony, the founder of griffain, developed the concept during a Solana hackathon. He later transformed this idea into a functional product that gained support and endorsement from Solana’s founder, Anatoly.
In simple words, Griffain is the first, and currently most performant, abstraction AI on Solana; it can execute swaps, manage wallets, mint NFTs, snipe tokens, and much more.
To be specific, here are the functions that Griffain offers:
- Execute trades with natural language
- Agents can tweet on behalf of users
- Multi-agent coordination
- Launch tokens with pump.fun, mint NFTs, and airdrop to selected addresses
- Snipe newly launched memecoins on pump.fun based on certain keywords or conditions
- Staking, automation, and DeFi strategy execution
- Fetch data from platforms for market analysis, such as identifying the top holders of a token
- Schedule tasks; users can also input memory into agents to build tailored agents
Although griffain offers numerous features, users still need to manually input token addresses or provide specific instructions to agents for execution. As a result, the current product is not yet fully optimized for beginners who may be unfamiliar with these technical requirements.
So far, griffain offers 2 types of AI agents: personal AI and special agents.
- Personal AI agents are controlled by users. Users can customize instructions and input memory settings to tailor the agent to their preferred use cases.
- Special agents are agents designed for specific tasks. For example, the Airdrop Agent is trained to find addresses and distribute tokens to designated holders, while the Staking Agent is programmed to stake SOL or other assets in pools for yield farming purposes.
A notable feature is griffain’s multi-agent collaboration system, where multiple agents can work together in a single chatroom. These agents are capable of solving complex tasks independently while maintaining coordinated efforts.

Wallet:
Upon account creation, the system generates a wallet with Privy, and users can delegate the account to agents to execute transactions and manage the portfolio autonomously.
Keys are split with Shamir's Secret Sharing (SSS) so that neither Griffain nor Privy holds custody of the wallet. According to Slate, SSS works by splitting the key into three parts:
1. Device Share: stored in your browser and retrieved when the tab is open.
2. Auth Share: stored on Privy's servers and retrieved when you authenticate and log into the application.
3. Recovery Share: stored encrypted on Privy's servers and decrypted only when the user types in their password to log into the tab.
In addition, users have the option to export the wallet from Griffain's frontend.
Anon:

Anon was created by Daniele Sesta, known for creating the DeFi protocols Wonderland and MIM (Magic Internet Money). Similar to Griffain, Anon is made to simplify DeFi interactions for both newcomers and veterans.
While the team has outlined potential features, none have been verified, as the product is not publicly accessible yet.
Some features include:
- Execute trades with natural language (in multiple languages, including Chinese)
- Cross-chain bridging enabled by LayerZero
- Borrowing and supplying with partnered protocols such as Aave, Spark, Sky & Wagmi
- Getting real-time price & data feeds via Pyth
- Offering time-, gas-, and price-based automation & triggers
- Providing real-time market insights such as sentiment checks, social profile analysis, etc.
Other than that, Daniele recently published two major updates about Anon:
- Automated Framework:
A TypeScript framework that helps more projects integrate with Anon quickly. The framework requires all data and interactions to follow a pre-defined structure, lowering the risk of AI hallucination and making Anon more reliable.
- Gemma:
A research-focused agent that can collect real-time data from both on-chain DeFi metrics (such as TVL, volumes, and perp DEX funding rates) and off-chain sources such as Twitter and Telegram for social sentiment analysis. This data is transformed into opportunity alerts and tailored insights for users.
Judging from the docs, this makes Anon one of the most anticipated and powerful abstraction tools across the landscape, especially in this market.
Slate (No Token Yet):

Backed by BigBrain Holdings, Slate positions itself as an “Alpha AI” that autonomously trades based on on-chain signals. Currently, Slate is the only abstraction AI capable of automating and executing trades on Hyperliquid.
Slate prioritizes optimized price routing, fast execution, and can run simulations before trades.
Key features include:
- Cross-chain swaps between EVM chains and Solana
- Automated trading based on price, market cap, gas fees, and profit/loss metrics
- Natural language task scheduling
- On-chain trading aggregation
- Telegram notification system
- Capable of opening longs and shorts, repaying under certain conditions, and LP management plus yield farming, including actions on Hyperliquid
Fee structure:
Overall, there are 2 fee categories:
General actions:
Slate does not charge a fee for regular transfers/withdrawals but charges a 0.35% fee for swap, bridge, claim, borrow, lend, repay, stake, unstake, long, short, lock, unlock, etc.
Conditional actions:
If you set a conditional order (e.g., a limit order), Slate charges 0.25% for gas-based conditions or 1.00% for all other conditions.
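As a back-of-the-envelope helper, the published schedule can be encoded as below. The rates are as listed above, but the function name and the assumption that a conditional fee replaces (rather than stacks on) the general fee are mine, and fees may of course change.

```python
# Illustrative encoding of Slate's published fee schedule (assumed semantics).
GENERAL_FEE = 0.0035        # 0.35% on swaps, bridges, lends, longs, etc.
GAS_CONDITION_FEE = 0.0025  # 0.25% for gas-based conditional orders
OTHER_CONDITION_FEE = 0.01  # 1.00% for all other conditions

def slate_fee(amount, action, condition=None):
    """Fee in the same units as `amount`; transfers/withdrawals are free."""
    if action in {"transfer", "withdraw"}:
        return 0.0
    if condition is not None:  # assumption: conditional rate replaces the general one
        rate = GAS_CONDITION_FEE if condition == "gas" else OTHER_CONDITION_FEE
    else:
        rate = GENERAL_FEE
    return amount * rate
```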

Wallet:
Slate integrates Privy's embedded wallet architecture, ensuring that neither Slate nor Privy maintains custody of your wallet. Users can also connect their existing wallet and grant the agent the authority to execute trades on their behalf.
Abstraction AI comparison:
A comparison of some of the most popular abstractions on the market:

In comparison, most AI abstraction tools today support bridging and cross-chain swaps between Solana and EVM chains. Slate offers Hyperliquid integration, while Neur and Griffain are currently available only on Solana, though I believe they have plans to add cross-chain support soon.
The majority of platforms integrate both Privy embedded wallets and EOA wallets, allowing users to maintain custody of their funds but requiring them to delegate access to agents in order to execute certain trades. This is an area where TEEs can play a part in ensuring the AI is tamper-proof.
While most AI abstraction tools share common features like token launches, trade execution, and conditional orders with natural languages, their performance varies significantly.
In terms of products, we are still at the very early stage of abstraction AI.
The comparative analysis of these 5 projects reveals distinct strengths across platforms. Griffain distinguishes itself through its comprehensive feature set, extensive partnership network, and advanced multi-agent coordination capabilities for workflow management (with Orbit being the only other platform offering multi-agent functionality).
Anon's strengths lie in its rapid response times, multi-language support, and Telegram integration, while Slate has carved out its niche through its automation platform and exclusive support for Hyperliquid.
However, many of these abstraction AI platforms still struggle with fundamental challenges, particularly in processing basic swap operations (such as USDC pairs).
Common issues include difficulties in accurately identifying token addresses, retrieving current prices, and conducting up-to-date market trend analysis. The key differentiating factors among these platforms ultimately come down to response time, accuracy, and answer relevance. Looking ahead, there’s a clear need for a comprehensive dashboard to maintain transparency and enable performance comparison across all abstraction AI platforms.
3.2 Autonomous Yield optimisation and Portfolio Management:
Unlike traditional yield strategies, protocols in this sector use AI to analyze onchain data for trend analysis, then provide insights that help the teams develop better yield optimization and portfolio allocation strategies.
It is common to see models trained on networks like Bittensor subnets, or off-chain for cost-effectiveness. For AI to execute trades autonomously, verification methods such as ZKPs are implemented to ensure models remain honest and verifiable. Below are a few examples of yield-optimization DeFai protocols.
T3AI:
T3AI is a lending protocol that supports under-collateralized loans by using AI as an intermediary and risk engine. The protocol's AI agent continuously monitors loan health in real time and works to ensure loans remain repayable through T3AI's risk metric framework.
The AI also enables precise risk forecasting by analyzing how different assets relate to each other and how their prices change over time. T3AI's AI does this by:
- Looking at price data from major CEXs and DEXs
- Measuring how volatile different assets are
- Studying how different assets' prices move together
- Finding hidden patterns in how multiple assets interact with each other
The AI then suggests optimal allocation strategies based on users' portfolios, potentially enabling autonomous AI portfolio management once the model is fine-tuned. ZK proofs and a validator network are also implemented to ensure all actions are verifiable and reliable. Below is a workflow of how the AI is used and verified in T3AI.
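The first two of those inputs, volatility and co-movement, reduce to standard statistics over price series. A minimal pure-Python stand-in (not T3AI's actual models) could look like this:

```python
# Risk-input sketch: per-asset volatility and pairwise correlation of returns,
# the raw ingredients behind the kind of analysis described above.
from statistics import mean, pstdev

def returns(prices):
    """Simple period-over-period returns from a price series."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def volatility(prices):
    """Population standard deviation of returns."""
    return pstdev(returns(prices))

def correlation(p1, p2):
    """Pearson correlation between the return series of two assets."""
    r1, r2 = returns(p1), returns(p2)
    m1, m2 = mean(r1), mean(r2)
    cov = mean([(a - m1) * (b - m2) for a, b in zip(r1, r2)])
    return cov / (pstdev(r1) * pstdev(r2))
```

A risk engine would compute these over rolling windows for every asset pair and feed the resulting covariance structure into its allocation model.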

Kudai:

Kudai is an experimental, GMX ecosystem-focused agent launched by GMX Blueberry Club using the EmpyrealSDK toolkit. The token is currently trading on Base.
Kudai’s concept is to channel all trading fees earned from $KUDAI into funding agents for autonomous trading operations, then distribute profits back to token holders.
In the upcoming Phase 2/4, Kudai will be able to interpret natural-language commands on Twitter to:
- Buy and stake $GMX to generate new revenue streams
- Invest in GMX GM pools to enhance earnings further
- Sweep GBC NFTs at floor prices to expand its portfolio
After this phase, Kudai will become fully autonomous, executing orders independently for leverage trading, arbitrage, and yield farming.
However, the team hasn’t disclosed any additional information, and Kudai AI remains an experimental product at this stage.
Sturdy Finance V2:
Sturdy Finance is a lending and yield farming aggregator that leverages AI models (trained by Bittensor SN10 subnet miners) to optimize yields by moving funds between different whitelisted silo pools.
Specifically, Sturdy runs on a two-tier architecture consisting of silo pools and an aggregator layer.
1. Silo pools are isolated single asset pools where users can only lend one asset or borrow with one collateral.
2. Aggregator layers are built on top of Yearn V3, where users' assets are distributed to whitelisted silo pools according to utilization rates and yield. The Bittensor subnet provides the aggregator with the best allocation strategy.
When users lend to the aggregator, they maintain exposure only to their selected collateral types, eliminating any risk from other lending pools or collateral assets.
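To illustrate the aggregator's job in miniature, the toy below splits a deposit across whitelisted silos in proportion to yield. The real allocation strategy comes from Bittensor SN10 miners and is far richer (it also reacts to utilization); the pool names and APYs here are invented.

```python
# Toy aggregator allocation: split a deposit across silo pools
# proportionally to each pool's current supply APY.

def allocate(deposit, silos):
    """Return {silo name: amount}, weighting by APY share."""
    total_apy = sum(s["apy"] for s in silos)
    return {s["name"]: deposit * s["apy"] / total_apy for s in silos}

split = allocate(10_000, [
    {"name": "pxETH silo", "apy": 0.06},
    {"name": "crvUSD silo", "apy": 0.04},
])
```

The actual strategy would additionally cap per-silo exposure and rebalance as utilization moves rates, but the proportional split captures the basic mechanic.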

As of this writing, Sturdy V2’s TVL has been declining since May, with the aggregator’s total TVL at approximately $3.9M, representing about 29% of the protocol’s total TVL.
Sturdy's DAU has remained in the double digits (<100) since Sep 2024, with pxETH and crvUSD being the primary lending assets in the aggregator. The protocol's performance has clearly stagnated over the past few months, and the integration of AI looks like a move to regain momentum.

3.3 Market Analysis agent:
Aixbt is a market-sentiment-tracking agent that aggregates and analyzes data from over 400 KOLs on Twitter. Using its proprietary engine, aixbt identifies real-time trends and publishes insights around the clock.
Among all AI agents in the space, AixBT commands a significant 14.76% mindshare, making it one of the most influential agents in the ecosystem.

Aixbt is clearly created for social media and social interaction, and the insights it publishes directly reflect the market's attention.
Its capabilities go beyond delivering alpha: it is interactive, capable of replying to user questions, and can even launch tokens through Twitter using specialized toolkits. For example, the $CHAOS token was created through a collaboration between AixBT and another interactive bot called Simi using the @EmpyrealSDK toolkit.
As of now, holders of 600,000 $AIXBT tokens (worth ~ $300,000) can access its analytics platform and terminal.
3.4 Defai Infrastructure and platform:
Web3 AI agents would not be possible without a decentralized infrastructure. These projects provide not only model training and inference but also data, verification methods, and a coordination layer for AI agents to develop on.
Whether for Web2 or Web3 AI, models, compute, and data are the three bedrocks driving LLM and AI agent excellence. Open-source models trained in a decentralized manner will be appreciated by agent builders, as they completely remove the single-party risk of centralized ownership and open up the possibility of user-owned AI: developers no longer need to rely on LLM APIs owned by Web2 AI giants such as Google, Meta, and OpenAI.
Below is a good AI infrastructure map created by pinkbrains.

Model Creation:
Pioneers like Nous Research, Prime Intellect and Exo Labs are pushing the boundaries of decentralized training.
Nous Research's DisTrO training algorithm and Prime Intellect's DiLoCo algorithm have successfully trained models with over 10 billion parameters in low-bandwidth environments, demonstrating that large-scale training is achievable outside conventional, centralized systems. Exo Labs advanced this further with the launch of SPARTA, a distributed AI training algorithm that reduces inter-GPU communication by more than 1,000 times.
Bagel is trying to become a decentralized HuggingFace to provide models and data for AI developers while addressing the open source data attribution and monetization problems through the use of cryptography. Bittensor creates a competitive marketplace for participants to contribute compute, data and intelligence to accelerate AI model and agent developments.
Data & Compute Provider:
Many believe that aixbt has emerged as the clear leader in the utility agents category because of its access to high-quality datasets. Providers like Grass, Vana, Sahara, Space and Time, and Cookie DAO supply high-quality or domain-specific data, or grant AI developers access to data within walled gardens, enhancing their capabilities. Leveraging 2.5+ million nodes, Grass has been able to scrape an astonishing 300+ TB per day.
While Nvidia has so far managed to train its video model on only 20 million hours of video data, Grass's video dataset is 15x bigger (300 million hours) and is growing by 4 million hours a day; that is 20% of Nvidia's entire dataset being collected by Grass daily. In other words, Grass retrieves the equivalent of Nvidia's total video dataset every 5 days.
Without compute, agents simply can't run. Compute providers such as Aethir and io.net aggregate various GPUs to provide cost-effective options to agent developers. Hyperbolic's decentralized GPU marketplace slashes compute costs by up to 75% compared to traditional centralized providers, while hosting open-source AI models to offer low-latency inference with throughput comparable to Web2 cloud providers.
Hyperbolic enhances its GPU marketplace and cloud service by introducing AgentKit, a powerful interface that provides AI agents full access to Hyperbolic’s decentralized GPU network. It features an AI-readable map of available computational resources, allowing agents to scan in real-time and gain insights into not only resource availability but also specifications, current load, and performance details.
AgentKit unlocks a future where agents can independently source their own compute and pay for their own compute costs.
Verification:
Through its innovative Proof of Sample verification mechanism, Hyperbolic ensures every inference interaction in this ecosystem is verified, establishing trust for the agentic future.
However, verification only addresses part of the trust issue for autonomous agents. The other aspect of trust involves privacy protection, where TEE infrastructure projects like Phala, Automata, and Marlin can play a role: proprietary data or models used by AI agents can be kept secure. In fact, truly autonomous agents cannot operate fully without a TEE, as it is crucial for protecting sensitive information such as the wallet's private key and for preventing unauthorized access to the agent's Twitter account login credentials.
How does TEE work?
A TEE isolates sensitive data within a protected CPU/GPU enclave during processing. The contents of the enclave are visible only to authorized program code; cloud service providers, developers, and administrators cannot access this part of the hardware.
The primary use case for TEEs has historically been the execution of smart contracts, particularly in DeFi protocols handling more sensitive financial data. The integration of TEEs in DeFai therefore includes traditional DeFi scenarios such as:
1. Transaction privacy: TEEs can hide transaction details, such as sender and receiver addresses and transaction amounts. Platforms like Secret Network and Oasis use TEEs to protect transaction privacy in DeFi apps, enabling private payments.
2. Anti-MEV: by executing smart contracts within a TEE, block builders cannot access transaction information, preventing the front-running attacks that create MEV. Flashbots leveraged TEEs to develop BuilderNet, a decentralized block-building network that reduces the censorship risks associated with centralized block builders. Chains like Unichain and Taiko also use TEEs to provide a better trading experience for users.
These features can also be built with alternative solutions like ZKPs or MPC. However, TEEs currently offer the best efficiency of the three for executing smart contracts, simply because the model is hardware-based.
On the agent side, TEEs offer agents two abilities:
1. Autonomy: TEEs can create independent operating environments for agents, ensuring that their strategies are executed without human interference. This guarantees that investment decisions are based solely on the agent's independent logic.
TEEs can also allow agents to control social media accounts, ensuring that any public statements they make are independent and free from external influence, avoiding suspicions of promotional bias such as undisclosed advertising. Phala is collaborating with the ai16z team to enable Eliza to run efficiently in TEE environments.
2. Verifiability: the other thing TEEs provide for agents is verifiability. People can verify whether agents are using the promised models for computation and producing valid results. Automata and Brevis are working together to develop this capability.
AI Agent Collective:
As more and more specialized agents with specific use cases (DeFi, gaming, investment, music, etc.) enter the space, better agent collaboration and seamless communication become essential.
The infrastructure for “agent swarms” framework has emerged to address the limitations of monolithic agents. Swarm intelligence allows agents to work together as a team, pooling their capabilities to achieve shared goals. Coordination layers abstract away the complexity, making it easier for agents to collaborate with shared objectives and incentives.
Several Web3 players, including Theoriq, FXN, and Questflow, are progressing in this direction. Of all these players, Theoriq, which initially launched as ChainML in 2022, has been working towards this goal the longest, with the vision of becoming the universal base layer for Agentic AI.
To meet this vision, Theoriq handles agent registrations, payments, security, routing, planning and governance at the base layer modules. It also connects the supply and demand sides, offering an intuitive agent-building platform called Infinity Studio, which allows anyone to deploy their own agents, along with Infinity Hub, a marketplace where customers can browse all available agents. In its swarm system, a meta-agent selects the most suitable agents for a given task, creating “swarms” to work towards shared objectives, while tracking reputation and contributions to maintain quality and accountability.
The Theoriq token provides economic security: agent operators and community members stake tokens on agents to signal quality and trust, incentivizing high-quality service and discouraging malicious behavior. The token also serves as a medium of exchange to pay for services and access data, and rewards participants who contribute data, models, etc.

As the narrative around AI agents becomes more established as a long term industry sector with clear utility agents taking the lead, we could see a revival in Crypto x AI infrastructure projects, delivering strong price performance. These projects have the potential to leverage their VC funding, years of R&D, and domain-specific technical expertise to expand across the value chain. This could allow them to develop their own advanced utility AI agents, capable of outperforming 95% of other agents currently in the market.
4/ The Evolution of DeFai, and the Next Steps for DeFai
I always believe the market will evolve in 3 stages: first demanding efficiency, then decentralization, and finally privacy.
DeFai will evolve in four phases.
Phase 1 of DeFai will focus on efficiency, with tools that improve user experience for complex DeFi tasks without requiring deep protocol knowledge. Examples include:
- AI that understands user prompts even with imperfect formatting
- Fast swap execution within minimal block times
- Real-time market research helping users make profitable decisions based on their goals
If this innovation is realised, these tools can save time and energy while lowering barriers to on-chain trading, potentially creating a “Phantom” moment in the coming months.
In Phase 2, agents will trade autonomously with minimal human intervention. Trading agents that can execute strategies based on third-party insights or other agents' data will create a new DeFi paradigm. Professional or sophisticated DeFi users can fine-tune their models and build agents to generate optimal yields for themselves or their clients with less manual monitoring.
In Phase 3, users will begin to focus on wallet management concerns and AI verification, as users demand transparency. Solutions like TEE and ZKP will ensure AI systems are tamper-proof, protected from third-party interference, and verifiable.
Finally, once these phases are complete, a no-code DeFi AI engineering toolkit or AI-as-a-service protocol could create an agent-based economy where fine-tuned models are traded using crypto.
While this vision is ambitious and exciting, several bottlenecks remain unsolved:
- Most current tools are simply ChatGPT wrappers, with no clear benchmarks to identify legitimate projects
- On-chain data fragmentation pushes AI models toward centralization rather than decentralization, and it's unclear how on-chain agents will address this issue.

DeFi + AI = DeFai was originally published in IOSG Ventures on Medium.