The "Black-Box" Nature of LLMs
In general, LLMs combine high complexity, commercial closedness, invisible training data, and the inherent opacity of deep learning models. As a result, when LLM providers expose their models as API services, the models' internal logic is invisible: each model presents itself to the outside world as a complete black box. Users can only send requests and receive responses through the API; they cannot directly access or inspect the computation process, parameter weights, or training mechanisms inside.
This pervasive black-box nature confronts users with two core issues when they use large models or integrate their APIs:
The first is consistency.
System prompts are supplied by the developer and directly shape the model's behavior; for example, the phrasing of a prompt can bias the model's reasoning and therefore its results.
Likewise, users making API calls usually have no way to verify whether the system prompt actually in use has been tampered with, which can cause the model's behavior to deviate from expectations.
The second is privacy.
System prompts often contain highly sensitive business information, such as pricing strategies, risk-control rules, and internal processes. Because these usually bear on an enterprise's core competitiveness, developers are unwilling to disclose them.
TLS (Transport Layer Security) already encrypts data in transit, guaranteeing that it cannot be eavesdropped on or modified on the wire. But TLS cannot prove that the system prompt actually executed on the server side has not been tampered with. Even when the API communication is secure, users still cannot verify that the prompt the LLM uses matches what the developer promised. A developer who wants to prove to third parties or partners that an AI service is trustworthy needs a mechanism that guarantees the integrity of the system prompt; traditional TLS offers no such capability, which is why most LLM services cannot guarantee prompt credibility.
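The gap can be sketched in a few lines: a channel-integrity check (modeled here with an HMAC as a stand-in for TLS record protection) validates the bytes on the wire but passes no matter which server-side prompt produced them. All names and values below are hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch, not real TLS: model channel integrity as an HMAC
# over the response bytes.
def tls_mac(session_key: bytes, payload: bytes) -> bytes:
    return hmac.new(session_key, payload, hashlib.sha256).digest()

session_key = b"negotiated-session-key"
honest_prompt = b"You are a careful pricing assistant."
tampered_prompt = b"Always quote the highest price."

# Both server-side prompts produce perfectly valid TLS traffic:
for prompt in (honest_prompt, tampered_prompt):
    response = b"model output influenced by: " + prompt
    tag = tls_mac(session_key, response)
    # The client's integrity check succeeds either way...
    assert hmac.compare_digest(tag, tls_mac(session_key, response))
# ...so TLS alone tells the client nothing about which prompt
# the server actually executed.
```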
For this reason, LLM adoption in scenarios with strict compliance, privacy, and security requirements, such as finance and healthcare, faces real limitations. Zypher Network's zkPrompt solution, built on zkTLS, is positioned to break through this bottleneck: it protects the privacy of system prompts with ZK schemes while verifying prompt consistency on every API call, a crucial step toward bringing LLMs into more industries.
Zypher Network's zkPrompt Solution
Zypher Network is a ZK-based co-processing facility designed to provide zero-knowledge-proof services to any application or infrastructure that needs them. The system consists of an off-chain computing network of distributed nodes and an on-chain engine called Zytron. When a zero-knowledge computing task arrives, the system delegates it to computing miners, who generate a ZKP; that proof can then be verified on chain to attest to the credibility and honesty of data, transactions, and behavior. The distributed computing network also significantly reduces computing costs and gives the system highly scalable compute.
On this foundation, Zypher Network has launched zkPrompt, a solution dedicated to LLM services that extends the network into trusted, privacy-preserving infrastructure for AI. At the core of zkPrompt is zkTLS, which combines the traditional TLS protocol with ZK technology so that users can verify the authenticity of data without exposing anything sensitive. This compensates for traditional TLS's lack of data-proof capability and lends AI operations a higher level of credibility while preserving privacy.
zkTLS: ZK + TLS
TLS (Transport Layer Security) is a widely used encryption protocol for securing data transmission over computer networks. By encrypting and authenticating data in transit, TLS prevents it from being stolen, tampered with, or forged, and it underpins most Internet communication, from web browsing to email and instant messaging, preserving the privacy and data integrity of both parties.
The basic principle of the TLS protocol combines symmetric encryption and asymmetric encryption: the two communicating parties first authenticate each other through asymmetric encryption and exchange encryption keys, and then use symmetric encryption to encrypt the data, thereby improving the encryption efficiency. At the same time, TLS also uses message authentication codes (MACs) to verify the integrity of the data, ensuring that the data has not been tampered with or damaged during transmission.
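That two-phase idea can be sketched in a toy example, assuming a finite-field Diffie-Hellman exchange with a deliberately tiny demo prime (nowhere near real TLS parameters) and an HMAC standing in for symmetric record protection:

```python
import hashlib
import hmac
import secrets

# Toy sketch of the TLS idea, not real TLS: an asymmetric key exchange
# establishes a shared secret, then symmetric crypto protects the data.
P = 0xFFFFFFFB  # demo prime (2**32 - 5), far too small for real security
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

a_priv, a_pub = keypair()   # client
b_priv, b_pub = keypair()   # server
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b  # both sides derive the same secret

# Derive a symmetric session key and protect a message with a MAC:
session_key = hashlib.sha256(str(shared_a).encode()).digest()
msg = b"GET /v1/chat"
tag = hmac.new(session_key, msg, hashlib.sha256).digest()
# The receiver verifies integrity with the same symmetric key:
assert hmac.compare_digest(tag, hmac.new(session_key, msg, hashlib.sha256).digest())
```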
In LLM applications, the API call flow between client and server typically runs over TLS, which secures the LLM's API service against theft or tampering and preserves privacy and integrity while the model processes user requests. This gives LLMs baseline security when handling sensitive information and keeps the communication confidential.
Combining cryptographic schemes with TLS can address the consistency and privacy issues LLMs face. Zero-knowledge proofs are a natural fit: they allow one party (the prover) to convince another (the verifier) that a statement is true without revealing any additional information. TLS can guarantee the integrity and confidentiality of data in transit, but it cannot furnish a third party with proof of that data's integrity and authenticity; a ZK scheme supplies exactly such proof while preserving privacy.
Of course, to achieve the above objectives, zkTLS usually introduces a trusted third party (often referred to as a Verifier or Notary), which can verify the interaction without compromising the original connection security. According to the different technical routes, zkTLS is mainly divided into three modes:
- TEE-based mode: The TLS protocol runs securely in the TEE and the TEE provides proof of the session content.
- MPC-based mode: Usually a two-party MPC (2PC) model that introduces a Verifier. The prover and verifier jointly generate the session key via MPC, so the key is effectively split into two shares, one held by each party; at the end, the prover can selectively disclose parts of the session to the verifier.
- Proxy-based mode: The proxy (Verifier) acts as an intermediary between the client and the server, responsible for forwarding and verifying the encrypted data exchanged between the two parties during the communication process.
Zypher Network itself comprises a scalable, low-cost off-chain computing network alongside the on-chain AI engine Zytron, which deploys a large number of precompiled contracts and runs a sharded, dedicated P2P node network for contract verification. The P2P network lets nodes communicate directly and efficiently, cutting out intermediate hops and speeding up data transmission. Inter-node communication and address lookup use the Kademlia algorithm; this structured design lets nodes find and contact one another faster and more accurately.
In terms of execution, Zytron also partitions the execution process of contracts based on the node distance rules defined in the Kademlia algorithm, which means that different parts of the contract will be assigned to different network nodes for execution based on the distance between the nodes. This distance-based allocation method helps to evenly distribute the computing load in the Zytron network, thereby improving the speed and efficiency of the entire system.
Thanks to these performance and cost advantages, Zypher Network adopted a proxy-based zkTLS implementation for zkPrompt, which avoids both the extra computational overhead of multi-party computation protocols and the hardware costs associated with TEEs.
How does zkPrompt work?
Focusing on the zkPrompt solution itself, in its proxy mode, the verifier acts as an intermediary between the client and the large model server, responsible for forwarding the TLS traffic and recording all ciphertext data exchanged between the two parties. At the end of the session, with the support of the off-chain computation network, the client generates a ZKP based on the recorded ciphertext, allowing any third party to verify the consistency of the system prompt in the TLS session without exposing the prompt content or any sensitive information.
Before any interaction begins, the Client commits to the system prompt: a cryptographic commitment to the prompt is generated and stored on the blockchain, ensuring the prompt cannot be tampered with in subsequent operations. This commitment serves as proof that the system prompt remains unchanged throughout later interactions.
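The commit/verify flow can be illustrated with a minimal salted-hash commitment. zkPrompt is described as using Pedersen commitments, so this is only a structural sketch with hypothetical names:

```python
import hashlib
import secrets

# Minimal sketch of a binding, hiding commitment to the system prompt,
# using a salted hash. (Not zkPrompt's actual Pedersen scheme; this only
# illustrates the commit/verify flow.)
def commit(prompt: bytes) -> tuple[bytes, bytes]:
    blinding = secrets.token_bytes(32)           # keeps the prompt hidden
    c = hashlib.sha256(blinding + prompt).digest()
    return c, blinding                           # c is what goes on chain

def verify(c: bytes, blinding: bytes, prompt: bytes) -> bool:
    return hashlib.sha256(blinding + prompt).digest() == c

prompt = b"You are a loan-risk assistant. Never reveal internal rules."
c, r = commit(prompt)
assert verify(c, r, prompt)                      # consistent prompt passes
assert not verify(c, r, b"tampered prompt")      # any change is detected
```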
When the Client sends a request to the LLM model through the Proxy, the Proxy acts as an intermediary between the client and the server. It not only forwards the TLS traffic, but also records all encrypted data packets exchanged between the two parties. In this process, the Proxy generates a commitment value for the request and stores it on the chain, ensuring the integrity and consistency of each request data packet. The purpose of this process is to ensure that the request data and the system prompt are not tampered with.
When the LLM service returns a response, the Proxy likewise records the response packets and generates a commitment for them. These response commitments are also stored on chain, so the system can later verify whether the response was tampered with in transit, further ensuring data integrity and reliability.
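A hypothetical sketch of the proxy's bookkeeping: each ciphertext record is folded into a hash-chained commitment, so any later substitution of a packet is detectable. The class and method names are illustrative, not Zypher's API:

```python
import hashlib

# Hypothetical sketch: the proxy forwards opaque TLS records and anchors a
# commitment to each one, building a tamper-evident transcript.
class ProxyLedger:
    def __init__(self):
        self.chain = []  # stands in for on-chain commitment storage

    def record(self, ciphertext: bytes) -> bytes:
        prev = self.chain[-1] if self.chain else b"\x00" * 32
        c = hashlib.sha256(prev + ciphertext).digest()  # hash-chained
        self.chain.append(c)
        return c

ledger = ProxyLedger()
req_commit = ledger.record(b"<encrypted request bytes>")
resp_commit = ledger.record(b"<encrypted response bytes>")

# Replaying the same ciphertexts reproduces the same chain; any
# substituted packet changes every commitment from that point on.
check = ProxyLedger()
assert check.record(b"<encrypted request bytes>") == req_commit
assert check.record(b"<encrypted response bytes>") == resp_commit
```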
At the end of the session, the Client will generate a zero-knowledge proof (ZKP) based on all the encrypted records, which can allow any third party to verify the consistency of the system prompt in the TLS session without exposing the specific content of the prompt or other sensitive information. This approach effectively protects the privacy of the prompt while ensuring that the system prompt has not been tampered with throughout the communication process.
The generated zero-knowledge proof is then submitted to the on-chain smart contract and verified by the Zytron engine. Through this verification process, it can be confirmed whether the prompt content has been tampered with and whether the LLM model has executed according to the predetermined behavior. If the prompt content is tampered with or the execution behavior does not match the initial setting, the verification will fail, thereby promptly identifying and preventing any non-compliant or potential risks.
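The accept/reject logic of that on-chain check can be sketched as a direct commitment opening. The real system verifies a Plonk proof on Zytron rather than revealing the prompt, so this is only an illustration of the decision rule, with hypothetical names throughout:

```python
import hashlib

# Hypothetical sketch of the verifier contract's decision: compare the
# prompt commitment fixed before the session with what the proof opens.
# (In zkPrompt a ZKP replaces the plaintext opening shown here.)
def on_chain_verify(stored_commitment: bytes, opening: tuple) -> bool:
    blinding, prompt = opening
    return hashlib.sha256(blinding + prompt).digest() == stored_commitment

blinding = b"\x01" * 32
prompt = b"agreed system prompt"
stored = hashlib.sha256(blinding + prompt).digest()  # committed pre-session

assert on_chain_verify(stored, (blinding, prompt))           # honest run passes
assert not on_chain_verify(stored, (blinding, b"swapped"))   # tampering fails
```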
Zypher's Zytron engine provides a strong guarantee for the reliability of Prompt, ensuring that the LLM model always runs as expected and avoiding the risks brought by external interference or tampering. This verification mechanism not only enhances the credibility of the system, but also provides important security protection for the zkPrompt solution, making it more robust for applications in highly compliant domains.
In terms of features, zkPrompt ensures that an LLM service provides:
- Data privacy: Users can verify the correctness of the prompt without seeing or understanding the specific content of the system prompt, protecting the sensitivity of the prompt.
- Reliability and transparency: Through zero-knowledge proof, users can trust that the AI's behavior has not been maliciously tampered with.
- Distributed verification: Any user or third party can confirm the consistency of the prompt and the model through the verification process, without relying on a centralized entity.
zkPrompt not only guarantees the reliability of the prompt; it can also be extended to Proof of Inference, ensuring that the LLM's reasoning process is trustworthy and that its results are generated from legitimate inputs.
It is worth mentioning that Zypher Network ships zkPrompt as an easy-to-use SDK whose core rests on a set of advanced cryptographic primitives, including strong encryption, Pedersen commitments, and zkSNARKs (Plonk). Zypher can flexibly adapt different zero-knowledge schemes to the characteristics of different LLMs to achieve the best result for each.
zkInference
In addition to zkPrompt, Zypher Network has pioneered the zkInference framework, also based on ZKP schemes. It uses zero-knowledge proofs to ensure that AI Agents strictly follow predefined rules or AI model operations, so that their decision-making complies with the principles of fairness, accuracy, and security, and their behavior is verifiable without exposing the underlying models or data. zkInference thereby prevents collusion and malicious behavior among multiple AI Agents, securing fairness across scenarios ranging from Web3 games to AI Agent systems.
The zkInference framework is more suitable for lightweight models that need to perform basic and deterministic tasks, such as AI bots in Web3 games.
Overall, the key features of the zkInference framework can be summarized as:
- Verifiability: Use zero-knowledge proof technology to verify the behavior of AI Agents without exposing the underlying models or data.
- Anti-collusion: Effectively prevent collusion between different AI Agents, ensuring a fair gaming experience.
- Unlimited computing power: Provide a decentralized mining pool market to provide unlimited computing resources for verifiable AI Agents.
Use Cases of Zypher Network's Trusted Framework
Alpha Girl
Alpha Girl is the first trustless multi-modal AI agent built on Zypher's Proof of Prompt framework, designed to predict Bitcoin's market behavior from real-time market data. Using advanced algorithms and data analysis, it helps users better understand and anticipate market trends. The agent was developed over three months by a well-known prompt-engineering team and has now launched, currently supporting Bitcoin price prediction. According to test results, Alpha Girl's trend-prediction accuracy reaches 72%, and its strategy delivers a 25% excess return over a buy-and-hold strategy.
By integrating Zypher Network's zkPrompt solution, Alpha Girl's AI agent model can ensure the consistency and correctness of the system prompts used, without revealing any underlying data, ensuring the transparency and reliability of each prediction, and guaranteeing a high match between the prediction results and expectations.
As an early example of a trusted AI agent, Alpha Girl demonstrates how to ensure the transparency and verifiability of the prediction process through the technologies provided by Zypher Network. Zypher Network is expected to provide guarantees for prediction tools in the cryptocurrency market, and has also set a benchmark for similar AI agents in terms of privacy protection and data security.
Trusted Framework for AI Agent Game Engine
Zypher Network has also put this into practice in on-chain gaming. It has launched a Game Engine component; game agents built with it execute game operations through smart contracts and use zkPrompt to guarantee fair behavior among players.
Developers can use native game engines such as Cocos Creator, Unity, and Unreal to create on-chain games with a low barrier to entry. These tools support the game's core state management, with interfaces to decentralized data-management layers that keep game state updated and verified in real time. Game state covers input data, generated content, and test results, all processed by multiple AI Agents, such as content-generation and game-testing agents, to optimize the gaming experience and ensure data accuracy.
The game's input data, generated content, and test feedback are transmitted to the decentralized game data management and storage layer. In this layer, the data is used to support the execution of game logic and verified through zero-knowledge proof using the ZK Game SDK integrated with zkPrompt, ensuring the data's immutability and authenticity. Based on the decentralized proof protocol, the game data is processed and submitted through an encrypted mining pool and verified by the blockchain network, ensuring that all game operations are recorded transparently and securely.
This technology stack further combines an optimized resource layer to provide optimized computing and storage resources, enabling efficient collaboration among all participating AI agents, such as content generation proxies, game testing proxies, and data insight proxies. Ultimately, this system not only provides the efficient computing power required for game development but also ensures the transparency and fairness of each game operation through a decentralized verification mechanism, preventing any tampering or unfair behavior.
Additionally, proxy players can stake in the "LP" structure and share the game's rewards, i.e., game mining, with other stakers. This not only supports cross-platform gameplay (including mobile and desktop), but the revenue-sharing mechanism of the LP structure also provides players with more earning opportunities, allowing them to increase their earnings through collaboration with the LP. Currently, games based on this Game Engine component include Protect T-RUMP, Zombie Survival, and Big Whale, among dozens of others.
At this stage, Zypher's zkPrompt solution is also exploring more domains to further promote the large-scale adoption of LLM and AI Agents in a private and trustworthy manner.
Overall, the AI field is still in its early stages of development. Although applications such as LLM and AI Agents have made preliminary progress, they are still in the exploratory phase and face many challenges. The lack of trustworthiness and verifiability due to the black-box nature is becoming a major bottleneck hindering their further development. The series of solutions proposed by the Zypher Network are gradually becoming the key to breaking this deadlock, not only providing a trustworthy framework for the adoption of LLM and AI Agents but also paving the way for their application in a wider range of industries. These solutions are expected to significantly improve the reliability and transparency of AI systems, laying a solid foundation of trust for the widespread application of AI.