Written by Eito Miyamura
Compiled by: TechFlow
GatlingX, a project led by Oxford University alumni focusing on machine learning and reinforcement learning, recently launched the “GPU-EVM” — which, according to internal benchmarks, may be the most powerful Ethereum Virtual Machine (EVM) currently available.
According to the development team, GPU-EVM is an EVM scaling solution powerful enough that state-of-the-art reinforcement learning (RL)-based artificial intelligence agents can be trained on it. It takes advantage of parallel execution of many Ethereum applications at once to help train AI agents to find security vulnerabilities.
GPU-EVM uses graphics processing units (GPUs) to perform operations in parallel, thereby increasing transaction throughput. The team claims that GPU-EVM can process tasks almost 100 times faster than current high-performance EVMs such as evmone and revm, mainly because a GPU's architecture is naturally suited to executing many operations simultaneously.
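To make that parallelism concrete, here is a minimal, purely illustrative sketch (not GatlingX's implementation): the same EVM ADD opcode applied across a large batch of independent transactions in one vectorized step, with NumPy's array parallelism standing in for the GPU. The batch size and 64-bit simplification of 256-bit EVM words are assumptions for the example.

```python
import numpy as np

# Hypothetical batch of one million independent transactions, each about to
# execute an ADD opcode on its top two stack items (EVM's 256-bit words are
# simplified to 64-bit integers here for illustration).
batch_size = 1_000_000
stack_a = np.arange(batch_size, dtype=np.uint64)
stack_b = np.full(batch_size, 7, dtype=np.uint64)

def batch_add(a, b):
    """Execute the ADD opcode for every transaction in the batch at once.
    On a GPU this single operation would be spread across thousands of cores
    instead of looping a million times on one CPU core."""
    return a + b

results = batch_add(stack_a, stack_b)
```

The key point is that the whole batch advances in one operation; a sequential interpreter would need one loop iteration per transaction.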

By running Ethereum Virtual Machine (EVM) operations in parallel rather than sequentially, GPU-EVM handles many tasks at the same time, significantly speeding up computation. This breakthrough by a team of Oxford University computer science and artificial intelligence alumni has significantly improved the unit economics of EVM computations per second.
The Ethereum Virtual Machine (EVM) is the industry-standard virtual machine that runs smart contracts and underpins modern blockchain technology. The EVM acts like an operating system for blockchains: through its CPU-based client software, it enables transactions across many distributed computers without requiring trust in a third party.
The performance enhancements GPU-EVM provides bring a huge capability boost to ambitious downstream engineering teams: infrastructure for AI/RL models interacting with the EVM, L2 acceleration, MEV, backtesting, and more (see below for details).
GPU-EVM: A New Paradigm for EVM Computing
Nvidia started out as a niche company focused on gaming but has become a key player in computing, at the forefront of the AI revolution. This evolution reflects the shift from Moore’s Law, which predicted a doubling of computing power every two years, to Huang’s Law, named after Nvidia CEO Jensen Huang. Huang’s Law states that GPU performance will more than double in two years, surpassing CPUs due to the integration of hardware, software, and AI, making GPUs central to accelerating complex tasks.
As we reach the limits of Moore's Law, reliance on GPU parallelism heralds a new era of computing, a transition from CPU-dominated to GPU-driven progress (see Dennard scaling, Amdahl's Law). This shift is like moving from a single-lane road to a multi-lane highway, not only speeding up processes but also enabling more simultaneous activities, thereby expanding what is technologically possible.
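Amdahl's Law, referenced above, quantifies why the serial fraction of a workload caps the benefit of parallel hardware. A quick worked calculation (the 95% figure and core count are hypothetical numbers for illustration):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's Law: overall speedup when a fraction p of the work is
    parallelized across n processors and the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even with thousands of GPU cores, a 5% serial portion caps the speedup
# near the 1 / 0.05 = 20x limit.
cap = amdahl_speedup(0.95, 10_000)
```

This is why exploiting the GPU requires restructuring workloads to shrink the serial portion, not just adding cores.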
The Jevons Paradox illustrates this effect well: just as the efficiency of LED light bulbs led to greater rather than less usage, the enhanced efficiency and reduced cost of GPU-EVM opens up a host of new possibilities. More than just saving resources, it spurs innovation and adoption in blockchain technology and beyond, promising a future where the efficiency of GPU computing drives exponential growth in computing applications.
GPU-EVM Performance
Leveraging the significant advances in general-purpose computing power of modern GPUs, we have brought GPU-EVM to more than 100 times the performance of traditional EVMs. Modern GPUs are designed with thousands of cores that can handle multiple operations simultaneously, making them ideal for parallel processing tasks. This inherent architectural advantage enables GPU-EVM to execute a large number of EVM instructions in parallel, greatly improving computing speed and efficiency.
To objectively measure the performance improvement brought by GPU-EVM, we performed a comprehensive benchmark using the open-source EVM Bench tool. This tool lets us simulate various EVM operations and compare execution times between a traditional CPU-based EVM and our GPU-EVM.
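We cannot reproduce the EVM Bench harness here, but the shape of such a benchmark can be sketched: time a sequential, CPU-style loop against a vectorized, GPU-style batch operation on the same workload. The function names and workload below are invented for illustration.

```python
import time
import numpy as np

def run_sequential(values):
    # CPU-style interpreter: one operation at a time.
    out = []
    for v in values:
        out.append(v + 1)
    return out

def run_vectorized(values):
    # GPU-style: the whole batch in one parallel operation.
    return np.asarray(values) + 1

def benchmark(fn, values):
    """Return (elapsed_seconds, result) for one run of fn."""
    start = time.perf_counter()
    result = fn(values)
    return time.perf_counter() - start, result

values = list(range(200_000))
t_seq, r_seq = benchmark(run_sequential, values)
t_vec, r_vec = benchmark(run_vectorized, values)
speedup = t_seq / t_vec
```

A real benchmark would of course run many iterations, warm up caches, and execute actual EVM bytecode rather than a toy increment.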

Compared to traditional computing paradigms, GPU-EVM takes full advantage of the GPU's unparalleled processing power, setting a new benchmark for EVM performance and efficiency.
With this technical foundation, let’s explore how GPU-EVM can revolutionize areas such as AI training and DeFi simulation, opening up new frontiers for blockchain applications.
Training AI Agents Using EVM
Artificial intelligence is changing the world, led by ChatGPT and other LLM chatbots trained with reinforcement learning from human feedback (RLHF). At its core, RL embodies the process of training AI agents to make decisions by interacting with an environment that rewards correct behavior. This learning approach is critical because it mirrors the fundamental way humans and animals learn from their surroundings, making it a cornerstone for the development of intelligent systems that can adapt and optimize their behavior.
AlphaGo’s landmark victory over the world champion of Go demonstrated the transformative power of RL. This was more than just a game; it demonstrated how, through RL, AI can discover strategies and solutions beyond human insight, through simulation and interaction with the complex environment of a Go board. This breakthrough highlights the essence of RL: enabling AI agents to autonomously navigate and learn from their environment to achieve specific goals, guided by a reward system.
However, the journey to achieve such AI breakthroughs through RL has been fraught with computational challenges. Simulating environments for AI requires significant computational resources. The emergence of GPU-parallelized simulation environments, such as NVIDIA’s Isaac Gym, Google’s Brax, and JAX-LOB, has played a key role in overcoming these obstacles. By leveraging GPU-parallelized simulation environments, these platforms have achieved performance gains ranging from 100x to 250,000x, making the computational aspects of RL more feasible and efficient. Since the bottleneck for AI training is often the CPU-GPU communication bandwidth to pass data, GPU parallelization enables these speed improvements and has become the industry standard in the RL research community.
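The lockstep-environment pattern these GPU-parallelized simulators rely on can be sketched in a few lines. Below, thousands of toy two-armed-bandit environments are stepped simultaneously, with NumPy vectorization standing in for the GPU; the environment, reward probabilities, and hyperparameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4096 environment copies stepped in lockstep, mimicking GPU-parallel
# simulators such as Isaac Gym or Brax (a toy 2-armed bandit per env).
n_envs, n_steps = 4096, 500
true_reward = np.array([0.2, 0.8])   # arm 1 pays off more often
q = np.zeros((n_envs, 2))            # per-env action-value estimates
counts = np.zeros((n_envs, 2))

for step in range(n_steps):
    explore = rng.random(n_envs) < 0.1                    # epsilon-greedy
    greedy = q.argmax(axis=1)
    actions = np.where(explore, rng.integers(0, 2, n_envs), greedy)
    rewards = (rng.random(n_envs) < true_reward[actions]).astype(float)
    idx = (np.arange(n_envs), actions)
    counts[idx] += 1
    q[idx] += (rewards - q[idx]) / counts[idx]            # incremental mean

# Fraction of environments whose agent learned that arm 1 is better.
learned = (q.argmax(axis=1) == 1).mean()
```

Note that every line in the loop operates on all 4096 environments at once; this is the structure that lets GPU simulators deliver their large speedups.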
In the rapidly evolving world of artificial intelligence, GPU-EVM, as a GPU-parallelized simulation environment, facilitates the training of AI agents directly within the blockchain ecosystem. One notable application is in the financial industry, where GPU-EVM can revolutionize real-time fraud detection systems. History shows the importance of such systems: Max Levchin developed PayPal's first fraud prevention mechanism to keep the company from going bankrupt. By enabling financial AI to simulate and analyze millions of transactions in seconds, GPU-EVM can identify anomalous patterns of fraudulent activity with unprecedented speed and accuracy, a capability that previously could have taken days to achieve.

Integrating AI agents with the EVM inside GPU-EVM opens new avenues for applying reinforcement learning (RL) principles within the blockchain space. Here, AI agents learn and improve by accurately identifying fraudulent transactions based on a predefined reward function.
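As a hypothetical illustration of such a predefined reward function (not GatlingX's actual design), a fraud-detection agent might be scored like this: a correct flag is rewarded, a missed fraud is penalized heavily, and false alarms carry a small penalty so the agent does not annoy legitimate users.

```python
# Hypothetical reward function for an RL fraud-detection agent.
def fraud_reward(predicted_fraud: bool, actually_fraud: bool) -> float:
    if actually_fraud:
        return 1.0 if predicted_fraud else -1.0  # catch vs. miss
    return -0.1 if predicted_fraud else 0.0      # false alarm vs. correct pass

# Total return over a batch of (prediction, ground-truth) pairs from
# simulated transactions.
def episode_return(decisions):
    return sum(fraud_reward(p, a) for p, a in decisions)
```

The asymmetric penalties encode a policy choice: missing real fraud is far costlier than an occasional false alarm, so the learned agent will err toward caution.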
L2 Acceleration/Emulation
The emergence of second-layer solutions is critical to increasing Ethereum's throughput, thereby promoting its adoption in mainstream applications, especially in the payment field. By processing transactions outside the main Ethereum blockchain (layer 1), L2 significantly enhances the capacity of the network while maintaining its fundamental principles of security and decentralization. Unlike traditional CPU-based systems, GPU-EVM operates independently and is able to seamlessly integrate and accelerate existing L2 solutions. This acceleration can be achieved through various methods, including optimizing view functions and applying algorithms such as Monte Carlo Tree Search for more efficient block construction and transaction ordering.
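As a toy illustration of the block-construction problem that methods like Monte Carlo Tree Search target, here is the simple greedy baseline: pack independent transactions by fee-per-gas under a gas limit. All numbers and names are invented; real block building must also handle interdependent transactions, which is exactly where search-based methods pay off over greedy ordering.

```python
def build_block(txs, gas_limit):
    """Greedy block builder. txs: list of (tx_id, gas_used, fee).
    Returns the chosen tx ids (highest fee-per-gas first) and total fees."""
    chosen, gas, fee_total = [], 0, 0
    for tx_id, gas_used, fee in sorted(txs, key=lambda t: t[2] / t[1],
                                       reverse=True):
        if gas + gas_used <= gas_limit:
            chosen.append(tx_id)
            gas += gas_used
            fee_total += fee
    return chosen, fee_total

txs = [("a", 21_000, 42), ("b", 100_000, 150), ("c", 50_000, 120)]
block, fees = build_block(txs, gas_limit=120_000)
```

Here "c" (highest fee-per-gas) and "a" fit under the limit while "b" does not; a tree search would additionally consider orderings where one transaction's execution changes the value of another.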
However, the role of the parallel EVM in L2 acceleration is complex and needs to be taken seriously. Directly accelerating an L2 via the parallel EVM is not as simple as it seems: to truly exploit the power of the parallel EVM, a concerted effort must be made to innovate in the design of L2 solutions and their databases, a point underscored by existing research in this area.
While integrating GPU-EVM with L2 solutions is extremely promising, other challenges remain. The main bottlenecks include storage-related limitations, long chains of interdependent transactions, and the cost of state bloat. GPU-EVM alone cannot solve all of these issues; innovative co-design of L2 solutions and the databases that support them is key to overcoming these obstacles and fully realizing the benefits of GPU-EVM.
DeFi Simulation/Fuzz Testing
The fundamental performance improvement of GPU-EVM has brought revolutionary changes to DeFi simulation and fuzz testing. This significant increase in data processing power makes it possible to discover edge cases of DeFi strategies and protocol designs that were not previously considered, revealing vulnerabilities that may otherwise stay hidden. To illustrate the significance of this progress: traditional CPU-based methods are like a squirt gun, while GPU-EVM is a firehose, a far more effective tool for exterminating bugs.
Due to the underlying performance gains of the GPU-EVM, fuzzers running on this platform can explore deeply and run at amazing speeds, identifying edge cases in seconds. This is in stark contrast to CPU-based fuzzers, which can take weeks or even months to discover the same issues. The ability to run these advanced fuzzers on top of the GPU-EVM allows for continuous monitoring of smart contracts, especially those in actual production. These automated systems are designed to relentlessly challenge smart contracts, trying to foresee potential vulnerabilities several moves ahead, like a strategic game of chess, with the ultimate goal of ensuring the highest level of security.
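The core loop of such a fuzzer can be sketched with a deliberately buggy toy contract and an invariant check. Everything below is invented for illustration; a production fuzzer would generate far richer call sequences, and on GPU-EVM many such loops would run in parallel.

```python
import random

class Vault:
    """Toy 'contract' with a deliberate bug: withdrawals over 100 skip the
    balance check, so the invariant balance >= 0 can be violated."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        if amount <= 100 and amount > self.balance:
            return                   # checked path: refuse the overdraft
        self.balance -= amount       # buggy path for large amounts

def fuzz(seed, iterations=10_000):
    """Throw random call sequences at the contract; return the step and
    amount of the first input that breaks the invariant, else None."""
    rng = random.Random(seed)
    vault = Vault()
    for i in range(iterations):
        amount = rng.randint(1, 200)
        if rng.random() < 0.5:
            vault.deposit(amount)
        else:
            vault.withdraw(amount)
        if vault.balance < 0:
            return i, amount         # counterexample found
    return None

result = fuzz(seed=0)
```

The fuzzer's only knowledge of the contract is the invariant it checks after every call; everything else is random exploration, which is why raw throughput translates directly into bug-finding power.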
Our upcoming product embodies this cutting-edge DeFi emulation and fuzz testing approach. Stay tuned as it redefines the standard for smart contract security and resilience.
About GatlingX
GatlingX is an application-infrastructure and AI lab focused on developing heavy technical infrastructure. Our mission is to create blockchain application products that operate at a deep infrastructure level.
We believe there are extremely hard technical problems that the blockchain industry has been unwilling to tackle precisely because they are so hard. Fast, cheap security, computational performance, and speed are necessary prerequisites for a thriving blockchain ecosystem, yet they remain painful, unsolved problems. We believe that unless the world's best problem solvers come together to solve them, no one will.
We are committed to advancing the latest technologies in areas such as artificial intelligence, GPU, blockchain, and distributed computing, which are critical to driving global technological progress.
We're a bunch of enthusiasts: if we can buy something off the shelf, we'll do it. If we can't, we'll build it ourselves.
Using GPU-EVM
GPU-EVM is currently in private early access as we expand GPU capacity. If you are interested in using GPU-EVM in your engineering work, please fill out this form to join the waitlist.
Our team is small but extremely talented. Our founding team consists of Oxford University alumni who have made groundbreaking achievements in infrastructure and applied AI, worked at companies such as CrowdStrike, Wayve, and Citadel Securities, and created far-reaching projects such as ZKMicrophone and Graphite.



