Written by: Austbot
Compiled by: TechFlow
Anagram Build spends most of its time researching novel crypto use cases and applying them to specific products. One of our most recent research projects is in the field of Verifiable Computation (VC). Our team has used this research to create a new open source system called Bonsol. We chose this research area because Verifiable Computation enables many valid use cases, and various L1s are working to make it more cost-effective and scalable.
In this paper, we have two goals:
First, we want to ensure you have a better understanding of VC as a concept and the products it may enable in the Solana ecosystem.
Second, we would like to introduce you to our latest creation: Bonsol.
What is Verifiable Computation?
The term “Verifiable Compute” might not appear in a bull market startup prospectus, but the term “zero knowledge” will. So, what do these terms mean?
Verifiable computation (VC) means running a specific workload in a way that produces a proof of the work, which can be publicly verified without re-running the computation. Zero-knowledge (ZK) is the ability to prove statements about data or computations without revealing all of the data or the computation's inputs. In the real world these terms get mixed together, and "zero knowledge" is somewhat of a misnomer: it is really about choosing what information needs to be disclosed to prove a statement. VC is the more accurate term, and it is the overall goal of many existing distributed system architectures.
How can VC help us build better crypto products?
So why would we want to add VC or ZK systems to platforms like Solana and Ethereum? The answer has more to do with security and trust than with raw capability. Today, the developer of a system acts as an intermediary between the user's trust in the black box and the technical capabilities that make that trust objectively valid. By leveraging ZK/VC technology, developers can reduce the attack surface of the products they build: VC systems shift the focus of trust onto the proof system and the computational workload being proven. This is similar to the inversion of trust that occurred when moving from the typical web2 client/server approach to the web3 blockchain approach, where trust shifts from relying on a company's promises to trusting open source code and the network's cryptographic systems. From a user's perspective there is no true zero-trust system; it all feels like a black box to them either way.
For example, with a ZK login system, developers bear less responsibility for maintaining secure databases and infrastructure, since the system only needs to verify that certain cryptographic properties hold. VC technology is being applied in many places where consensus is needed, so that the only condition for reaching that consensus is that the math checks out.
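To make the login example concrete, here is a minimal sketch in Rust; the types and the `verify_proof` function are hypothetical placeholders rather than any specific library's API:

```rust
// Hypothetical ZK login check: the server stores only a public commitment
// to the user's credential, never the credential itself.
struct StoredUser {
    // Public commitment (e.g. a hash) registered at signup.
    credential_commitment: [u8; 32],
}

// Placeholder for a real verifier from a proof-system library (Groth16, Plonk, ...).
fn verify_proof(_proof: &[u8], _public_input: &[u8; 32]) -> bool {
    unimplemented!("call into your proof system's verifier here")
}

fn login(user: &StoredUser, proof: &[u8]) -> bool {
    // The proof attests: "I know a secret whose commitment equals this value",
    // without revealing the secret. There is no password database to leak.
    verify_proof(proof, &user.credential_commitment)
}
```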
While there are many successful use cases of VC and ZK in the wild, many of them currently depend on progress at various layers of the crypto software stack before they are fast and efficient enough for production use.
As part of the work we do at Anagram, we have the opportunity to speak with a wide range of crypto founders and developers to understand how the current state of the crypto software stack impacts product innovation. These conversations have helped us identify an interesting trend: a group of projects is actively moving on-chain product logic off-chain, either because it has become too expensive or because they need more exotic business logic. As a result, these developers find themselves hunting for systems and tooling to balance the increasingly powerful on-chain and off-chain parts of their products. This is where VC becomes a critical part of the path, connecting the on-chain and off-chain worlds through trustless, verifiable methods.
How do current VC/ZK systems work?
Today, VC and ZK functions are primarily executed on alternative computation layers (such as rollups, sidechains, relays, oracles, or coprocessors) and surfaced through smart contract runtime callbacks. To enable this workflow, many L1 chains are working to provide shortcuts outside of the smart contract runtime (such as syscalls or precompiles) for operations that would otherwise be too expensive on-chain.
There are several common patterns for current VC systems; I will cover the four I encounter most. In all but the last, proofs are generated off-chain, and it is when and where the proofs are verified that gives each of these patterns its own advantages.
Fully verified on-chain
For VC and ZK proof systems that can generate small proofs, such as Groth16 or some Plonk variants, the proof is submitted on-chain and verified on-chain using previously deployed code. Such systems are now very common, and the easiest way to try this approach is to use Circom with a Groth16 verifier on Solana or the EVM. The downside is that these proof systems are quite slow, and they generally require learning a new language. Verifying a 256-bit hash in Circom requires manually handling each of the 256 bits; while there are libraries that let you simply call hash functions, behind the scenes those functions have been reimplemented in Circom. These systems are great when the ZK and VC elements of your use case are small and you need the proof to be confirmed valid before taking some other deterministic action. Bonsol currently falls into this first category.
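As a rough illustration of what this pattern looks like on Solana, here is a hedged sketch of a program that verifies a proof before taking a dependent action. The instruction layout and the `groth16_verify` helper are assumptions for illustration, not Bonsol's actual contract code:

```rust
// Sketch of pattern one: verify a Groth16 proof on-chain, then act on it.
use solana_program::{
    account_info::AccountInfo, entrypoint::ProgramResult, msg,
    program_error::ProgramError, pubkey::Pubkey,
};

pub fn process_instruction(
    _program_id: &Pubkey,
    _accounts: &[AccountInfo],
    instruction_data: &[u8],
) -> ProgramResult {
    // A Groth16 proof is a fixed 256 bytes; public inputs follow it.
    if instruction_data.len() < 256 {
        return Err(ProgramError::InvalidInstructionData);
    }
    let (proof, public_inputs) = instruction_data.split_at(256);

    // Verification is synchronous, so everything after this line can
    // safely depend on the proof being valid.
    if !groth16_verify(proof, public_inputs) {
        return Err(ProgramError::InvalidArgument);
    }
    msg!("proof verified; taking the dependent on-chain action");
    Ok(())
}

// Stand-in for a real verifier (e.g. one built on Solana's alt_bn128 syscalls).
fn groth16_verify(_proof: &[u8], _public_inputs: &[u8]) -> bool {
    unimplemented!()
}
```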
Off-chain verification
The proof is submitted on-chain so that all parties can see it, and it can be verified later using off-chain computation. In this mode you can support any proof system, but since proofs are not verified on-chain, operations that depend on a submitted proof do not get the same finality. This is great for systems with some kind of challenge window, during which parties can "veto" by trying to show that a proof is incorrect.
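A minimal sketch of this optimistic flow, assuming a slot-based challenge window (all names and the window length below are illustrative):

```rust
// Proofs are posted on-chain but only become final after a challenge window.
struct PostedProof {
    proof_bytes: Vec<u8>,
    posted_at_slot: u64,
    challenged: bool,
}

// Assumed window length; a real system would tune this carefully.
const CHALLENGE_WINDOW_SLOTS: u64 = 10_000;

fn is_final(p: &PostedProof, current_slot: u64) -> bool {
    // Finality is reached only once the window closes without a successful challenge.
    !p.challenged && current_slot >= p.posted_at_slot + CHALLENGE_WINDOW_SLOTS
}

fn challenge(p: &mut PostedProof, evidence_is_valid: bool) {
    // A challenger re-runs the computation off-chain; if they can show the
    // posted proof is wrong, the result is vetoed before it finalizes.
    if evidence_is_valid {
        p.challenged = true;
    }
}
```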
Verification network
The proof is submitted to a verification network, which acts as an oracle and calls the smart contract. You get finality, but you also need to trust the verification network.
Synchronous on-chain verification
The fourth and final pattern is quite different: here the proving and the verification both happen on-chain, synchronously. This is where an L1, or smart contracts on an L2, can actually run ZK schemes over user input and allow execution over private data to be proven in place. There are not many extensive examples in the wild, and what you can do with this approach is generally limited to more basic math operations.
Summary
All four of these patterns are being tested across various chain ecosystems, and we’ll see whether new ones emerge and which becomes dominant. On Solana, for example, there isn’t a clear winner, and it’s still early days for the VC and ZK landscape. Across many chains, including Solana, the most popular approach is the first pattern. Fully on-chain verification is the gold standard, but as discussed it comes with drawbacks: mainly latency and limits on what your circuit can do. When we dive deeper into Bonsol, you’ll see that it follows the first pattern, but with some differences.
Introducing Bonsol
Bonsol is a new Solana-native VC system built and open sourced by us at Anagram. Bonsol allows developers to create a verifiable executable involving private and public data and integrate the results into Solana smart contracts. The project builds on the popular RISC0 toolchain.
This project was inspired by a question we hear from many of the projects we talk to each week: “How can I prove something on-chain using private data?” While the “something” is different in each case, the underlying desire is the same: to reduce centralized dependencies.
Before we dive into the details of the system, let’s illustrate the power of Bonsol with two different use cases.
Scenario 1
A dapp allows users to buy lottery tickets in various pools of tokens. These pools are "tilted" daily from a global pool, so that the amounts of each token in a pool are hidden. Users can buy access to increasingly specific ranges of the token amounts in the pool, but there's a catch: once a user buys a range, it becomes public to all users at once. The user then has to decide whether to buy the lottery ticket: they can decide it isn't worth buying, or they can secure a stake in the pool by buying in.
Bonsol comes into play when a pool is created and when a user pays for a range. When a pool is created/tilted, the ZK program receives the amount of each token as private input; the token types and the pool address are public inputs. The proof attests to the random selection from the global pool into the current pool, and it also contains commitments to the balances. The on-chain contract receives this proof, validates it, and saves the commitments, so that when the pool is eventually closed and the balances are sent from the global pool to the lottery owners, anyone can verify that the token amounts have not changed since the random selection at the beginning of the pool.
When a user buys an "opening" of a hidden token-balance range, the ZK program takes the actual token balances as private input and generates a numerical range that is committed to along with the proof. The public input to this ZK program is the previously committed pool-creation proof and its output. In this way the entire system chains together: the previous proof must verify inside the range proof, and the token balances must hash to the same value committed in the first proof. The range proof is likewise submitted on-chain and, as described above, makes the ranges visible to all participants.
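To make the range-proof step concrete, here is a hedged sketch of what such a guest program could look like with the RISC0 guest API (`env::read`/`env::commit`). The input shapes, the SHA-256 commitment scheme, and the bucketing logic are our assumptions for illustration, not the dapp's actual circuit:

```rust
// Range-proof guest: re-commit to the hidden balances, check them against the
// pool-creation commitment, and reveal only coarse ranges.
use risc0_zkvm::guest::env;
use sha2::{Digest, Sha256};

fn main() {
    // Private input: the actual token balances in the pool.
    let balances: Vec<u64> = env::read();
    // Public inputs: the commitment from the pool-creation proof, and the
    // coarseness of the range the user paid for.
    let prior_commitment: [u8; 32] = env::read();
    let bucket_size: u64 = env::read();

    // The balances must hash to the same value committed in the first proof,
    // chaining this proof to the pool-creation proof.
    let mut hasher = Sha256::new();
    for b in &balances {
        hasher.update(b.to_le_bytes());
    }
    let commitment: [u8; 32] = hasher.finalize().into();
    assert_eq!(commitment, prior_commitment, "balances changed since pool creation");

    // Reveal bucketed ranges, never the exact balances.
    let ranges: Vec<(u64, u64)> = balances
        .iter()
        .map(|b| {
            let lo = (b / bucket_size) * bucket_size;
            (lo, lo + bucket_size)
        })
        .collect();

    env::commit(&(commitment, ranges));
}
```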
While there are many ways to implement this raffle-like system, Bonsol's properties make the trust required in the organization hosting the raffle very low. It also highlights the interoperability of Solana and VC systems: the Solana program (smart contract) plays a key role in anchoring trust, since it verifies the proof and then gates the next action on the result.
Scenario 2
Bonsol allows developers to create toolkits for other systems to use. Bonsol includes the concept of deployments: developers can create ZK programs and deploy them to Bonsol operators. Operators currently have some basic ways to evaluate whether executing a requested ZK program is economically worthwhile: they can see roughly how much computation the program will require, the input sizes, and the tip offered by the requester. A developer can deploy a toolkit that they think many other dapps will want to use.
In a ZK program's configuration, developers specify the order and types of the required inputs. Developers can also publish an InputSet that pre-configures some or all of those inputs. Because inputs can be partially pre-configured, users can verify computations over very large datasets without supplying the data themselves.
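As a hedged illustration (these Rust shapes are ours, not Bonsol's actual flatbuffers schema), a ZK program's input declaration and a partially pre-filled InputSet might look something like this:

```rust
// What kind of data each declared input slot expects.
enum InputType {
    PublicData,  // supplied in the clear with the request
    PrivateData, // fetched and decrypted by the claimant
    PublicProof, // a prior proof's output, for chaining proofs together
}

// One slot in a ZK program's declared input list.
struct InputSlot {
    index: u8,
    input_type: InputType,
    // Pre-filled by the toolkit developer, or None if the caller supplies it.
    prefilled: Option<Vec<u8>>,
}

// A published, partially pre-configured set of inputs for a deployed program.
struct InputSet {
    zk_program_id: [u8; 32],
    slots: Vec<InputSlot>,
}

impl InputSet {
    // A caller only fills the slots the developer left open, which is how a
    // user can verify a computation over a very large pre-configured dataset
    // without shipping that dataset themselves.
    fn open_slots(&self) -> impl Iterator<Item = &InputSlot> {
        self.slots.iter().filter(|s| s.prefilled.is_none())
    }
}
```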
For example, suppose a developer creates a system where, given an NFT, it can be proven that the on-chain history of ownership transfers includes a specific set of wallets. The developer can publish a pre-configured input set containing a large amount of historical transaction data, and the ZK program searches that set to find a matching owner. This is a contrived example that could be implemented in many ways.
Consider another example: a developer writes a ZK program that verifies a signature came from one of a set (or hierarchy) of key pairs, without revealing the public keys of the authorized set. Let's assume this is useful to many other dapps, and they adopt the ZK program; the protocol pays a small tip to its author. Since performance is critical, developers are incentivized to make their programs fast so that operators are willing to run them. And because the contents of a ZK program are verified, a developer trying to plagiarize another developer's work must change something about the program in order to deploy it, and any added operation affects performance. While this is certainly not infallible, it may help ensure that developers are rewarded for innovation.
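A hedged sketch of what such a guest program could look like, again using the RISC0 guest API. The choice of ed25519 and the hashing of the allowlist are assumptions we made for the illustration:

```rust
// Prove that a signature was produced by some key in a hidden allowlist,
// revealing neither the signing key nor the other members of the list.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};
use risc0_zkvm::guest::env;
use sha2::{Digest, Sha256};

fn main() {
    // Private inputs: the signer's public key and the full allowlist.
    let signer_key_bytes: [u8; 32] = env::read();
    let allowlist: Vec<[u8; 32]> = env::read();
    // Public inputs: the message and the signature over it.
    let message: Vec<u8> = env::read();
    let signature_bytes: Vec<u8> = env::read();

    // The signature must verify against the hidden key...
    let key = VerifyingKey::from_bytes(&signer_key_bytes).expect("bad key");
    let sig = Signature::try_from(signature_bytes.as_slice()).expect("bad signature");
    key.verify(&message, &sig).expect("signature does not verify");

    // ...and the hidden key must belong to the hidden allowlist.
    assert!(allowlist.contains(&signer_key_bytes), "key not authorized");

    // Commit only a hash of the allowlist, so verifiers can pin which list
    // was used without learning its members.
    let list_commitment: [u8; 32] = Sha256::digest(allowlist.concat()).into();
    env::commit(&(list_commitment, message));
}
```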
Bonsol Architecture
These use cases help describe what Bonsol is for, but let’s look at its current architecture, incentive model, and execution flow.

The above image depicts the flow. A user needs some kind of verifiable computation performed, usually through a dapp that requires the user to do something. This takes the form of an execution request, which contains information about the ZK program to be executed, the input or set of inputs, the time by which the computation must be proven, and a tip (which is how relays get paid).

The request is picked up by relays, which race to decide whether they want to claim ownership of the execution and start proving it. Depending on its capabilities, a particular relay operator may choose to pass because the tip is not worth it, or because the ZK program or inputs are too large. If a relay decides to perform the computation, it must execute a claim on it. If it is the first to claim, its proof will be accepted up until a certain time; if it does not produce a proof in time, other nodes can claim the execution. To claim, a relay must put up collateral, currently hardcoded to tip/2, which is slashed if it fails to produce a correct proof.
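To make the lifecycle concrete, here is a hedged sketch of the request and claim shapes; the field names are our assumptions, not Bonsol's actual on-chain layout:

```rust
// An execution request as described above: what to run, on what, by when,
// and for how much.
struct ExecutionRequest {
    zk_program_id: [u8; 32], // which deployed ZK program to execute
    inputs: Vec<Vec<u8>>,    // inputs, or a reference to a published InputSet
    expiry_slot: u64,        // the time by which the computation must be proven
    tip_lamports: u64,       // payment to whichever relay proves it
}

// A relay's claim on an execution request.
struct Claim {
    relay: [u8; 32],
    claimed_at_slot: u64,
    collateral_lamports: u64,
}

fn claim(req: &ExecutionRequest, relay: [u8; 32], now: u64) -> Claim {
    Claim {
        relay,
        claimed_at_slot: now,
        // Collateral is currently hardcoded to half the tip; it is slashed
        // if the relay fails to produce a correct proof in time.
        collateral_lamports: req.tip_lamports / 2,
    }
}

fn can_be_reclaimed(req: &ExecutionRequest, now: u64) -> bool {
    // If the first claimant misses the deadline, other relays may claim.
    now > req.expiry_slot
}
```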
Bonsol is built on the thesis that more computation will move to a layer where it is verified and validated on-chain, and that Solana will quickly become the chain of choice for VC and ZK. Solana’s fast transactions, cheap computation, and growing user base make it an excellent place to test these ideas.
Is this easy to build? Of course not!
Needless to say, there were challenges in building Bonsol. To bring a Risc0 proof to Solana and verify it there, we need to make the proof smaller, but we can’t simply shrink it without sacrificing its security. So we use Circom to wrap the Risc0 STARK, which can be ~200KB, into a Groth16 proof, which is always 256 bytes. Fortunately, Risc0 provides some initial tooling for this, but it adds a lot of overhead and dependencies to the system.
As we started building Bonsol and wrapping the STARK in a SNARK with existing tools, we looked for ways to reduce dependencies and increase speed. Circom allows circuits to be compiled to either C++ or wasm. We first tried compiling the Circom circuits to wasm and then into precompiled wasmu files generated by LLVM. This looked like the fastest and most effective way to make the Groth16 toolchain portable while staying fast. We chose wasm for its portability, since the C++ path relies on the x86 CPU architecture, which means newer MacBooks and ARM-based servers cannot use that code. But this became a dead end on the timeline we had to work with. Because most of our product research experiments are time-boxed until they prove their value, we had 2-4 weeks of development time to test this idea, and the LLVM wasm compiler could not handle the generated wasm code. With more work we might have overcome this; we tried many optimization flags and ways of making the LLVM compiler work as a wasmer plugin to pre-compile this code, but we were unsuccessful. Since the Circom circuits are about 1.5 million lines of code, you can imagine how much wasm that produces.

We then set our sights on building a bridge between just the C++ and our Rust relay codebase. This was quickly defeated as well, since the C++ contained some x86-specific assembly that we didn't want to fiddle with. To get the system out to the public, we ended up simply shipping a setup that uses the C++ code with some dependencies removed.

In the future, we hope to expand on another line of optimization we're working on: compiling the C++ code into execution graphs. The C++ artifacts that Circom emits are mostly modular arithmetic over finite fields with very large primes. This showed promising results for smaller, simpler C++ artifacts, but more work is needed to make it work with the Risc0 system: the generated C++ is about 7 million lines of code, the graph generator appears to hit stack size limits, and raising those limits produces other glitches we haven't had time to diagnose. Although some of these approaches did not yield the desired results, we were able to contribute to the open source projects involved, with the hope that those contributions will eventually be merged upstream.
The next set of challenges is more in the realm of design. An important part of the system is support for private inputs. These inputs need to come from somewhere, and due to time constraints we couldn’t add a fancy MPC cryptographic system to close that loop within Bonsol itself. To address this need and unblock developers, we added the concept of a private input server: it verifies, via a signature over the payload, that the party requesting the data is the current claimant, and then serves them the input. As we scale Bonsol, we plan to implement an MPC threshold decryption system through which relay nodes can allow claimants to decrypt private inputs. All this thinking about private inputs led us to an evolution of the design that we plan to make available in the Bonsol repo: Bonsolace, a simpler system that enables you, as a developer, to prove these zk programs on your own infrastructure. You prove them yourself, then verify on the same contract as the proving network. This suits very high-value private data use cases where access to the private data must be minimized as much as possible.
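A hedged sketch of the private-input-server check described above; the wire format and names are illustrative, and we use ed25519 here purely as an example signature scheme:

```rust
// Serve a private input only to the relay that currently holds the claim,
// proven by a signature over the request payload.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

struct InputRequest {
    execution_id: [u8; 32],
    payload: Vec<u8>,    // the bytes the claimant signed
    signature: [u8; 64], // the claimant's signature over `payload`
}

fn serve_private_input(
    req: &InputRequest,
    current_claimant: &VerifyingKey, // read from the on-chain claim
    private_input: &[u8],
) -> Option<Vec<u8>> {
    let sig = Signature::from_bytes(&req.signature);
    // Reject anyone who is not the current on-chain claimant.
    current_claimant.verify(&req.payload, &sig).ok()?;
    Some(private_input.to_vec())
}
```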
One thing we do in Bonsol that we haven’t seen elsewhere with Risc0 is enforce commitments (hashes) on input data as it enters the zk program. The contract checks the input digest the prover must commit to, making sure it matches the inputs the user expected and sent into the system. This incurs some cost, but without it a prover could cheat and run the zk program on inputs the user didn’t specify (a minimal sketch of this check closes out this section).

The rest of Bonsol’s development falls into normal Solana development, though we intentionally tried some new ideas there. In the smart contracts, we use flatbuffers as the only serialization system; this is a somewhat novel choice that we hope to grow into a framework, as it lends itself well to producing cross-platform SDKs. One final note: Bonsol currently requires a precompile to work most effectively. That precompile is scheduled to land in Solana 1.18; until then, we are gauging the team’s interest in this research and exploring other technologies beyond Bonsol.
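And here is the promised sketch of the input-commitment check, assuming SHA-256 (the hash Bonsol actually uses may differ): the guest hashes the inputs it really ran on and commits that digest, and the contract compares it against the digest the user supplied with the request.

```rust
// Bind the proof to the exact inputs the user specified: a prover that
// substitutes different inputs produces a different digest and fails the
// on-chain comparison.
use risc0_zkvm::guest::env;
use sha2::{Digest, Sha256};

fn main() {
    let inputs: Vec<Vec<u8>> = env::read();

    // Hash every input in order.
    let mut hasher = Sha256::new();
    for input in &inputs {
        hasher.update(input);
    }
    let input_digest: [u8; 32] = hasher.finalize().into();

    // ... run the actual computation over `inputs` here ...

    // The digest goes into the journal, where the contract checks it against
    // the commitment in the original execution request.
    env::commit(&input_digest);
}
```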
Summary
In addition to Bonsol, the Anagram Build team has been diving deep into many corners of the VC space. Projects like Jolt, zkLLVM, Spartan2, and Binius are some of the projects we are tracking, as well as companies working in the field of Fully Homomorphic Encryption (FHE).
Please check out the Bonsol repository and file issues for examples you need or ways you'd like to extend it. This is a very early project, and you have a chance to make your own contributions.
If you are working on an interesting VC project, apply to the Anagram EIR program.