Author: @knimkar
Translation: Plain Language Blockchain
We seem to be entering the Cambrian-explosion stage of use-case experimentation at the intersection of AI and crypto. I'm very excited about what is emerging from this wellspring of energy, and I'd like to share some of the exciting new opportunities we're seeing in the @SolanaFndn ecosystem.

1. High-Level Overview
1) Enabling the most vibrant agent-driven economy: Truth Terminal's first demonstration of what AI agents can achieve when they are able to interact on-chain was truly mind-bending. We look forward to seeing experiments that safely push the boundaries of agent capabilities on-chain. The potential in this space is immense, and we've barely begun to explore the design space. This has proven to be the most unexpected and explosive intersection of crypto and AI, and it's only just the beginning.

2) Empowering Solana developers with large language models (LLMs): LLMs have already shown impressive capabilities in writing code, and they're only going to get stronger. We hope to leverage these capabilities to boost Solana developers' productivity 2-10x. In the near term, we'll be creating high-quality benchmarks to measure how well LLMs understand and write Solana code (more on this below), which will help us understand the potential impact of LLMs on the Solana ecosystem. We look forward to supporting teams that make high-quality progress in fine-tuning models (which we'll validate through their stellar performance on these benchmarks!).

3) Supporting an open and decentralized AI technology stack: By "open and decentralized AI technology stack," we mean open and decentralized protocols that enable access to the following resources: training data, compute (for training and inference), model weights, and the ability to verify model outputs ("verifiable computation"). This open AI technology stack is crucial because it:
- Accelerates experimentation and innovation in the model development process
- Provides an alternative for those who may otherwise be forced to use untrustworthy AI (e.g., state-sanctioned AI)

We hope to support teams and products building at every layer of this technology stack. If you're working on anything related to these focus areas, feel free to reach out to the author!

2. Detailed Overview
Now, let's dive deeper into why we're excited about these three pillars and what we hope to see built.

1) Enabling the Most Vibrant Agent-Driven Economy

Why are we focused on this? The discussion around Truth Terminal and GOAT has been extensive, and I won't rehash it here; suffice it to say that the sheer craziness of what AI agents can achieve when interacting on-chain has irreversibly entered reality (and this with agents not even taking actions on-chain directly yet).
2) Empowering Solana Developers with LLMs

Why are we focused on this? LLMs still struggle to write good Solana code, for several reasons:
There isn't enough high-quality original data for LLM training;
Too few programs ship verified builds;
There isn't enough high-value Q&A on sites like Stack Overflow;
Solana's infrastructure is evolving rapidly, which means even code written six months ago may no longer reflect current best practices;
There is currently no way to assess a model's understanding of Solana.
What we hope to see
Help us publish better Solana data on the internet!
More teams releasing verified builds.
More people across the ecosystem actively participating in the Solana Stack Exchange, asking good questions and providing high-quality answers;
High-quality benchmarks that assess LLMs' understanding of Solana (RFP coming soon);
Fine-tuned LLMs that score highly on those benchmarks and, more importantly, genuinely accelerate Solana developers' work. Once we have high-quality benchmarks, we may offer rewards for the first model to hit a target score; stay tuned.
The ultimate achievement here would be high-quality, differentiated Solana validator client software created entirely by AI.
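To make the benchmark idea above concrete, here is a minimal sketch of what a Solana-knowledge evaluation harness could look like. Everything in it (the questions, the reference keywords, the keyword-overlap scoring rule, and the `toy_model` stand-in) is a hypothetical illustration for this post, not the planned benchmark.

```python
# Hypothetical sketch of a tiny Solana-knowledge benchmark harness.
# Items, keywords, and the scoring rule are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkItem:
    prompt: str
    keywords: list[str]  # reference terms a good answer should mention

ITEMS = [
    BenchmarkItem(
        prompt="Which account must sign a SOL transfer instruction?",
        keywords=["sender", "signer"],
    ),
    BenchmarkItem(
        prompt="What bytecode format do Solana programs compile to?",
        keywords=["bpf", "sbf"],
    ),
]

def score(model: Callable[[str], str], items: list[BenchmarkItem]) -> float:
    """Fraction of items whose answer mentions at least one reference keyword."""
    hits = 0
    for item in items:
        answer = model(item.prompt).lower()
        if any(k in answer for k in item.keywords):
            hits += 1
    return hits / len(items)

# Stand-in "model" so the harness runs without calling any real API:
def toy_model(prompt: str) -> str:
    return "The sender must be a signer; programs compile to BPF bytecode."

print(score(toy_model, ITEMS))  # -> 1.0
```

A real benchmark would of course grade generated programs by compiling and testing them rather than matching keywords, but the harness shape (a fixed item set, a model callable, a single comparable score) stays the same.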
3) Supporting an Open and Decentralized AI Technology Stack
Why are we focused on this? It is currently unclear how power in AI will balance between open-source and closed-source models in the long run. There are good arguments for why closed-source labs will maintain the technological lead and capture most of the value from foundation models. For now, the simplest expectation is that the status quo continues: large labs like OpenAI and Anthropic drive the technological frontier, while open-source models quickly catch up and ultimately yield uniquely powerful fine-tuned versions for specific use cases. We hope Solana can closely interface with and support the open-source AI ecosystem. Concretely, this means facilitating access to: data for training, compute for training and inference, model weights, and the ability to verify model outputs. We believe there are important concrete reasons for this:
A) Open-source models accelerate debugging and innovation in model development

The open-source community has shown how quickly it can refine and fine-tune open models like Llama, effectively complementing the efforts of large AI labs in pushing the frontier of AI capabilities (a leaked Google memo even argued last year that, on open source, "we have no moat, and neither does OpenAI"). We believe a thriving open-source AI technology stack is crucial to accelerating the pace of progress in this field.
B) An outlet for those who may be forced to use AI they don't trust (e.g., state-sanctioned AI)

AI is now perhaps the most powerful tool in the arsenal of an authoritarian regime. State-sanctioned models offer an officially sanctioned version of the truth and can become an enormous means of control. Highly authoritarian regimes may even end up with better models, because they are willing to disregard citizens' privacy when training them. The question with AI as a tool of control is not whether it will happen but when, and we want to support the open-source AI technology stack as much as possible to prepare for that possibility.
Solana has already become a home for many projects supporting the open-source AI technology stack:
Grass and Synesis One are promoting data collection;
@kuzco_xyz, @rendernetwork, @ionet, @theblessnetwork, @nosana_ai and others are providing vast amounts of decentralized computing resources.
Teams like @NousResearch and @PrimeIntellect are working on developing frameworks to make decentralized training possible (see below).
What we hope to see is more product development across the various layers of the open-source AI technology stack:
Decentralized data collection, such as @getgrass_io, @usedatahive, @synesis_one
On-chain identity: protocols that let wallets prove they are controlled by humans, and protocols that verify LLM API responses so consumers can confirm they are actually interacting with the model they expect
Decentralized training: such as @exolabs, @NousResearch and @PrimeIntellect
Intellectual property infrastructure: enabling AI to license (and pay for) the content it utilizes
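As one illustration of the response-verification idea in the list above: a provider could bind each (model, prompt, response) triple to a tag that consumers recompute and check. The sketch below uses a plain HMAC with a hypothetical shared provider key, purely to show the shape of the check; a real protocol would rely on signatures from an attested provider (e.g. a TEE) or a validity proof rather than a shared secret.

```python
# Illustrative sketch of committing to LLM outputs for later verification.
# PROVIDER_KEY is a hypothetical secret; real protocols would use attested
# signatures or proofs instead of a shared HMAC key.

import hashlib
import hmac
import json

PROVIDER_KEY = b"hypothetical-provider-secret"

def commit(model_id: str, prompt: str, response: str) -> str:
    """Provider side: bind (model, prompt, response) into one tag."""
    payload = json.dumps([model_id, prompt, response]).encode()
    return hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()

def verify(model_id: str, prompt: str, response: str, tag: str) -> bool:
    """Consumer side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(commit(model_id, prompt, response), tag)

tag = commit("llama-3-70b", "What is Solana?", "A high-throughput L1.")
assert verify("llama-3-70b", "What is Solana?", "A high-throughput L1.", tag)
# Claiming the same answer came from a different model fails verification:
assert not verify("other-model", "What is Solana?", "A high-throughput L1.", tag)
```

The point of the design is that the model identity is inside the committed payload, so a middleman cannot silently swap in a cheaper model while billing for a better one.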