This strategy not only significantly improves the efficiency of proof generation but also ensures that the proven results match the output of the high-precision quantized model exactly, balancing performance with credibility and moving verifiable machine learning closer to practical use.
Multi-Level Circuit Optimization Strategy
zkPyTorch employs a multi-level circuit optimization system that attacks efficiency and scalability from several directions to maximize the performance of zero-knowledge inference:
Batch Processing Optimization
By batching multiple inference steps together, zkPyTorch significantly reduces the overall computational cost, which is especially valuable for the sequential operations of Transformer-based language models. As shown in Figure 3, traditional large language model (LLM) inference generates output token by token, whereas zkPyTorch aggregates all input and output tokens into a single prompt-processing pass for verification. This confirms the overall correctness of the LLM inference in one shot while still guaranteeing that every output token matches standard LLM decoding.
In LLM inference, correct behavior of the KV cache (key-value cache) is crucial to the credibility of the output: if the model's inference logic is wrong, caching alone cannot reproduce results consistent with the standard decoding process. zkPyTorch faithfully replicates this process, so that every output covered by a zero-knowledge proof is deterministic and complete in a verifiable way.

Figure 3: Batch verification of large-scale language models (LLMs), where L represents input sequence length, N represents output sequence length, and H represents the hidden layer dimension of the model.
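The batch check in Figure 3 can be illustrated with plain PyTorch: a single forward pass over the concatenated prompt and output re-derives every generated token at once, instead of replaying token-by-token decoding. This is a minimal sketch, assuming greedy decoding and a Hugging Face-style causal LM that exposes `.logits`; it is not zkPyTorch's API.

```python
import torch

def check_batch_consistency(model, prompt_ids, output_ids):
    """One forward pass over prompt + output re-derives every generated
    token, rather than replaying token-by-token decoding."""
    # Concatenate prompt (length L) and generated output (length N).
    full_ids = torch.cat([prompt_ids, output_ids]).unsqueeze(0)
    with torch.no_grad():
        logits = model(full_ids).logits          # shape: (1, L + N, vocab)
    # The logit at position i predicts token i + 1, so predictions for the
    # N output tokens start at index L - 1.
    start = prompt_ids.shape[-1] - 1
    preds = logits[0, start:start + output_ids.shape[-1]].argmax(dim=-1)
    return torch.equal(preds, output_ids)
```

Under greedy decoding, this single check succeeding is equivalent to every token having been produced by standard step-by-step inference.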
Optimized Primitive Operations
zkPyTorch deeply optimizes the underlying machine learning primitives, greatly improving circuit efficiency. Convolution, for example, has always been computation-intensive; zkPyTorch applies an optimization based on the Fast Fourier Transform (FFT), turning convolution in the spatial domain into multiplication in the frequency domain and significantly reducing its computational cost. For non-linear functions such as ReLU and softmax, the system uses precomputed lookup tables, avoiding non-linear computations that ZKPs handle poorly and greatly improving the runtime efficiency of the inference circuits.
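As an illustration of both ideas (not zkPyTorch's internal implementation), the sketch below computes a 1-D convolution through the frequency domain and replaces ReLU over an int8 input domain with a precomputed lookup table:

```python
import numpy as np

def conv_fft(x, k):
    """Linear convolution via zero-padded FFT: pointwise multiplication in
    the frequency domain replaces the quadratic spatial-domain loop."""
    n = len(x) + len(k) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

# Lookup-table idea for a non-linear op such as ReLU on quantized inputs:
# every possible int8 value is mapped to its output ahead of time, so the
# circuit only needs to prove a table lookup rather than the non-linearity.
relu_table = {v: max(v, 0) for v in range(-128, 128)}

# Sanity checks: FFT result matches direct convolution; table matches ReLU.
assert np.allclose(conv_fft([1.0, 2.0, 3.0], [0.0, 1.0, 0.5]),
                   np.convolve([1.0, 2.0, 3.0], [0.0, 1.0, 0.5]))
assert relu_table[-7] == 0 and relu_table[42] == 42
```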
Parallel Circuit Execution
zkPyTorch automatically compiles complex ML operations into parallel circuits, fully exploiting multi-core CPUs/GPUs for large-scale parallel proof generation. For instance, when performing tensor multiplication, zkPyTorch splits the computation into independent subtasks and distributes them across multiple processing units in parallel. This parallelization strategy not only raises circuit execution throughput significantly but also makes efficient verification of large models a reality, opening a new dimension for scalable ZKML.
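The splitting idea can be pictured with ordinary NumPy and a process pool; this is only a sketch of distributing independent sub-blocks of a matrix product across workers, not how zkPyTorch schedules its circuits:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def matmul_block(args):
    """Multiply one horizontal block of A by the full matrix B."""
    a_block, b = args
    return a_block @ b

def parallel_matmul(a, b, n_workers=4):
    """Split A into independent row blocks, dispatch them to separate
    processes, and stack the partial results back into the full product."""
    blocks = np.array_split(a, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(matmul_block, [(blk, b) for blk in blocks]))
    return np.vstack(parts)

if __name__ == "__main__":
    A, B = np.random.rand(512, 256), np.random.rand(256, 128)
    assert np.allclose(parallel_matmul(A, B), A @ B)
```

Each block is an independent subtask with no shared state, which is exactly the property that lets proof generation scale across cores.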
Comprehensive Benchmarks: Breakthroughs in Performance and Precision
Through rigorous benchmark testing, zkPyTorch demonstrates excellent performance and practical usability across multiple mainstream machine learning models:
VGG-16 Model Test
On the CIFAR-10 dataset, zkPyTorch needs only 6.3 seconds to generate a proof for a single VGG-16 image classification, with almost no loss of precision compared to traditional floating-point computation. This shows that ZKML is now practical for classic tasks such as image recognition.
Llama-3 Model Test
For the Llama-3 large language model with 8 billion parameters, zkPyTorch achieves proof generation in roughly 150 seconds per token. More notably, its output maintains 99.32% cosine similarity with the original model's output, preserving semantic consistency while providing high credibility.

Table 1: Performance of various ZKP solutions in convolutional neural networks and transformer networks
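The semantic-consistency figure above is a cosine-similarity comparison between the outputs of the circuit-compiled (quantized) model and the original floating-point model. A minimal sketch of such a check, with purely illustrative data rather than Polyhedra's actual evaluation harness:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two output vectors, e.g. per-token logits
    from the quantized circuit model and the original float model."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Illustrative data only: a float logit vector and a slightly perturbed
# "quantized" version of it.
rng = np.random.default_rng(0)
float_logits = rng.standard_normal(32_000)
quant_logits = float_logits + rng.normal(0.0, 0.01, 32_000)
print(f"cosine similarity: {cosine_similarity(float_logits, quant_logits):.4f}")
```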
Broad Real-World Application Scenarios
Verifiable Machine Learning as a Service (Verifiable MLaaS)
As the value of machine learning models continues to rise, more AI developers choose to deploy their proprietary models in the cloud as Machine-Learning-as-a-Service (MLaaS) offerings. In practice, however, users often cannot verify the authenticity of inference results, while model providers want to protect core assets such as model architecture and parameters from theft or misuse.
zkPyTorch is designed to resolve exactly this tension: it gives cloud AI services native zero-knowledge verification capabilities, making inference results cryptographically verifiable.
As shown in Figure 4, developers can directly integrate large models like Llama-3 into zkPyTorch to build a trustworthy MLaaS system with zero-knowledge proof capabilities. By seamlessly integrating with the underlying ZKP engine, zkPyTorch can automatically generate proofs without exposing model details, verifying whether each inference is correctly executed, thereby establishing a truly trustworthy interaction foundation for both model providers and users.

Figure 4: Application scenario of zkPyTorch in Verifiable Machine Learning as a Service (Verifiable MLaaS).
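One way to picture the interaction in Figure 4 is the sketch below, where `model_circuit`, `prover`, `verifier`, and `model_commitment` are hypothetical placeholders standing in for zkPyTorch's actual interfaces: the provider returns an output plus a proof, and the user checks the proof against a public commitment to the model without ever seeing the weights.

```python
from dataclasses import dataclass

# Hypothetical interfaces -- zkPyTorch's real API may differ.
@dataclass
class InferenceResponse:
    output_tokens: list[int]   # the model's answer
    proof: bytes               # zero-knowledge proof of correct inference

def serve_inference(model_circuit, prover, input_tokens):
    """Provider side: run the circuit-compiled model and attach a proof.
    The weights stay private; only the output and the proof leave the server."""
    output_tokens = model_circuit.run(input_tokens)
    proof = prover.prove(model_circuit, input_tokens, output_tokens)
    return InferenceResponse(output_tokens, proof)

def check_inference(verifier, model_commitment, input_tokens, response):
    """User side: verify the proof against a public commitment to the model,
    confirming the inference was executed correctly."""
    return verifier.verify(
        model_commitment, input_tokens, response.output_tokens, response.proof
    )
```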
Safeguarding Model Valuation
zkPyTorch provides a secure and verifiable AI model evaluation mechanism, allowing stakeholders to carefully assess key performance indicators without exposing model details. This "zero-leakage" valuation method establishes a new trust standard for AI models, improving commercial transaction efficiency while safeguarding developers' intellectual property. It not only increases the visibility of model value but also brings greater transparency and fairness to the entire AI industry.
Deep Integration with EXPchain Blockchain
zkPyTorch is natively integrated with the EXPchain blockchain network independently developed by Polyhedra Network, jointly constructing a trustworthy decentralized AI infrastructure. This integration provides a highly optimized path for smart contract calls and on-chain verification, enabling AI inference results to be cryptographically verified and permanently stored on the blockchain.
With the synergy of zkPyTorch and EXPchain, developers can build end-to-end verifiable AI applications, truly implementing transparent, trustworthy, and auditable AI computing processes from model deployment and inference calculation to on-chain verification, providing underlying support for the next generation of blockchain+AI applications.
Future Roadmap and Continuous Innovation
Polyhedra will continue to advance zkPyTorch, focusing on the following directions:
Open Source and Community Co-building
Gradually open-source zkPyTorch's core components, inspiring global developers to participate and promote collaborative innovation and ecosystem prosperity in the zero-knowledge machine learning field.
Expanding Model and Framework Compatibility
Broaden support for mainstream machine learning models and frameworks, further enhancing zkPyTorch's adaptability and universality, enabling flexible embedding in various AI workflows.
Development Tools and SDK Construction
Launch comprehensive development toolchains and Software Development Kits (SDKs) to simplify integration processes and accelerate zkPyTorch's deployment and application in practical business scenarios.
Conclusion
zkPyTorch is an important milestone towards a trustworthy AI future. By deeply integrating the mature PyTorch framework with cutting-edge zero-knowledge proof technology, zkPyTorch not only significantly improves the security and verifiability of machine learning but also reshapes the deployment methods and trust boundaries of AI applications.
Polyhedra will continue to innovate in the "Secure AI" field, pushing machine learning towards higher standards in privacy protection, result verifiability, and model compliance, helping to construct transparent, trustworthy, and scalable intelligent systems.
Stay tuned for our ongoing updates and see how zkPyTorch reshapes the future of secure, intelligent computing.