
5-6x EVM Speedup: How We Built a Parallel Execution Engine in Zig

Zig-EVM uses wave-based parallel transaction execution to deliver 1000 transactions in 162ms. Here's the architecture behind the performance.

Dipankar Sarkar

Zig-EVM is an open-source, MIT-licensed Ethereum Virtual Machine implementation written in Zig that achieves a 5-6x throughput improvement over sequential execution through wave-based parallel transaction processing. On an 8-core machine, it executes 1000 transactions in 162ms compared to 970ms for the equivalent sequential run, using a combination of O(n) dependency analysis, speculative execution on a work-stealing thread pool, and conflict-driven rollback. It implements 141 EVM opcodes, full 256-bit arithmetic, Ethereum-compatible gas metering, and nested execution contexts (CALL, DELEGATECALL, STATICCALL), with FFI bindings for C, Python, Rust, and JavaScript.

TL;DR

  • What: A complete EVM implementation in Zig with 141 opcodes and full gas metering.
  • Why Zig: Explicit allocators, comptime metaprogramming, zero-cost C ABI, no hidden control flow — all critical for deterministic, embeddable execution engines.
  • How it’s fast: Wave-based parallel execution groups independent transactions into concurrent batches using O(n) dependency analysis and a work-stealing thread pool with speculative execution and rollback.
  • Benchmarks: 1000 transactions in 162ms parallel vs 970ms sequential (5.98x speedup on 8 threads). DEX swap workloads see 5.06x. Mixed workloads hit 5.56x.
  • Embeddable everywhere: FFI bindings for C (zigevm.h), Python (ctypes), Rust (crate with ethers/serde features), and JavaScript (N-API via npm install zigevm).
  • Use cases: L2/rollup execution engines, EVM research, transaction simulation, blockchain indexing.
  • Open source: github.com/cryptuon/zig-evm under MIT license.

Why Zig? A Systems Language Decision for EVM Execution

Choosing a language for a production EVM is a consequential architectural decision. The EVM has specific properties that constrain the design space: deterministic execution, exact 256-bit integer arithmetic, precise gas accounting, and the need to embed in diverse host environments. We evaluated Rust, Go, C++, and C before settling on Zig. Here is the reasoning.

Explicit Allocators

Every function in Zig that allocates memory takes an Allocator parameter. This is more than a convention: Zig's standard library provides no default global allocator, so any code that allocates must be handed one explicitly. For an EVM, this is transformative.

EVM execution is inherently scoped: a transaction starts, allocates stack frames, memory pages, and storage buffers, and then finishes. With explicit allocators, we use arena allocation per transaction. When the transaction completes (or rolls back), we free the entire arena in a single operation. There is no tracing garbage collector walking the heap, no reference counting overhead, and no fragmentation from long-running allocator state.

In Rust, the global allocator is the default. You can use custom allocators, but the ecosystem (including most crates) is not designed around them. In Go, allocation is implicit and managed by the GC. In C++, allocator-aware containers exist but are notoriously difficult to use correctly across library boundaries. Zig’s approach means every dependency we write or import is allocator-aware by default.

For parallel execution, this matters even more. Each thread gets its own arena allocator. There is zero allocator contention between threads — no mutex on malloc, no GC stop-the-world pauses. This is a meaningful contributor to our parallel scaling efficiency.

Comptime: Zero-Cost Metaprogramming

Zig’s comptime keyword allows arbitrary computation at compile time. We use this extensively in the EVM implementation:

  • Opcode dispatch tables are generated at comptime. The 141-opcode handler table is a compile-time-constructed array of function pointers, with zero runtime initialization cost.
  • Gas cost tables are computed and validated at comptime. If a gas cost constant is inconsistent with the Ethereum Yellow Paper specification, the build fails — not a test, not a runtime assertion, but a compile error.
  • BigInt operations use comptime to select optimized code paths based on operand characteristics known at compile time, avoiding branches in hot loops.

Rust has procedural macros that serve a similar purpose, but they operate on token streams and are substantially more complex to write and debug. C++ templates offer compile-time computation, but with error messages that are famously opaque. Zig’s comptime is plain Zig code running at compile time, with the same debuggability as runtime code.

Zero-Cost C ABI

Zig functions can be exported with C calling convention using export and callconv(.C). The resulting shared library is indistinguishable from one compiled by a C compiler. There is no runtime, no initialization function, no special linking requirements.

This gives us FFI bindings for free. The zigevm.h header is generated directly from the Zig source. Python loads the .so via ctypes. Rust links against it as a standard C library. Node.js wraps it with N-API. Every binding is a thin layer over the same native code — no serialization overhead, no IPC, no embedded interpreter.

Rust can do this with #[no_mangle] and extern "C", and a cdylib build avoids most runtime setup, but many Rust types do not have stable ABI representations, so the exported surface must be reduced to C-compatible types by hand. Go’s cgo has significant overhead per call and GC interactions that make it unsuitable for high-frequency EVM execution. C++ name mangling and ABI instability make cross-language embedding fragile.

No Hidden Control Flow

Zig has no operator overloading, no hidden function calls from constructors/destructors, no exceptions, and no implicit conversions. When you read Zig code, what you see is what executes.

For an EVM, this is a correctness property. Gas metering requires that we know exactly how many host-machine operations occur for each EVM opcode. Hidden allocations, implicit copies, or exception unwinding would make gas accounting unpredictable. In Zig, every allocation is visible in the source, every error is explicitly handled with try or catch, and control flow is always apparent from reading the code.

The Language Comparison Summary

| Criterion | Zig | Rust | Go | C++ | C |
|---|---|---|---|---|---|
| Explicit allocators | Language-enforced | Opt-in, ecosystem friction | No (GC) | Possible but painful | Manual (malloc/free) |
| Compile-time computation | comptime (full language) | Proc macros (token streams) | No | Templates (complex errors) | Preprocessor only |
| C ABI | Zero-cost, native | Possible with extern "C" | cgo overhead | Name mangling issues | Native |
| Hidden control flow | None | RAII destructors, panics | Goroutine scheduling, GC | Constructors, exceptions, operators | None |
| Memory safety | Runtime safety checks (removable) | Borrow checker (compile-time) | GC | Manual | Manual |
| Build system | Built-in (zig build) | Cargo | go build | CMake/Meson/etc. | Make/CMake |

Zig gives us the performance characteristics of C, the metaprogramming power to rival Rust macros, and the FFI story that neither Go nor C++ can match — with a codebase that remains readable to anyone who knows C.


Architecture: From Sequential to Parallel Execution

The Sequential Baseline

Every production EVM today — geth, reth, Besu, Nethermind — processes transactions sequentially. Transaction 0 executes to completion, its state changes are committed, and then transaction 1 begins. This model is simple and correct: it matches the Ethereum specification, where transaction ordering within a block is deterministic and each transaction sees the state produced by all preceding transactions.

The cost is throughput. On our benchmark hardware (8-core AMD Ryzen, 32GB RAM), sequential execution of 1000 simple ETH transfer transactions takes approximately 970ms. For a block containing DEX swaps with their heavier storage access patterns, execution time rises further. This is the fundamental bottleneck for L2 rollups that need to process thousands of transactions per block with sub-second latency.

The sequential model leaves 7 of 8 cores idle. Zig-EVM’s parallel execution engine is designed to use them.

Step 1: Dependency Analysis (O(n) Complexity)

Before executing any transaction, Zig-EVM performs a dependency analysis pass over the entire transaction batch. The goal is to determine which transactions can safely execute in parallel — that is, which transactions operate on disjoint state.

The algorithm works as follows:

  1. Read/write set extraction: For each transaction, we extract the set of storage slots and account addresses it will read from and write to. For simple transfers, this is the sender and receiver addresses. For contract calls, we use static analysis of the bytecode to extract accessed storage keys. When static analysis is insufficient (dynamic storage access via computed keys), we conservatively mark the transaction as touching “unknown state,” which forces it into a sequential dependency chain.

  2. Conflict detection: Two transactions conflict if one writes to a storage slot or account that the other reads from or writes to. We build a conflict graph where edges represent dependencies. This is implemented as a hash map scan: for each transaction’s write set, we check whether any preceding transaction’s read or write set intersects. With O(1) hash map lookups, the total work is O(n · k), where n is the number of transactions and k is the average read/write-set size; for bounded k, this is linear in n.

  3. Dependency chain construction: From the conflict graph, we compute a partial ordering of transactions. Transactions with no conflicts are independent and can execute in any order. Transactions with conflicts must execute in their original order relative to each other.

The key insight is that in real-world Ethereum blocks, most transactions are independent. Two users transferring ETH to different addresses have no state overlap. Even in blocks with heavy DEX activity, the conflict density is typically low enough to extract significant parallelism.
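
The read/write-set scan described above can be sketched as follows. This is a minimal illustration, not Zig-EVM's actual code: the names (Tx, build_dependencies) are hypothetical, and storage slots are modeled as plain strings. Per key, we track the last writer and all prior readers, so each transaction is processed in time proportional to its read/write-set size.

```python
# Hypothetical sketch of the O(n * k) read/write-set dependency scan.
from dataclasses import dataclass, field

@dataclass
class Tx:
    index: int
    reads: set = field(default_factory=set)   # storage slots / accounts read
    writes: set = field(default_factory=set)  # storage slots / accounts written

def build_dependencies(txs):
    """Return {tx_index: set of earlier tx indices it must follow}."""
    last_writer = {}  # key -> index of the last tx that wrote it
    readers = {}      # key -> indices of txs that read it
    deps = {tx.index: set() for tx in txs}
    for tx in txs:
        # A read depends on the last write to the same key (read-after-write).
        for key in tx.reads:
            if key in last_writer:
                deps[tx.index].add(last_writer[key])
        for key in tx.writes:
            # A write depends on the last write and on all prior reads.
            if key in last_writer:
                deps[tx.index].add(last_writer[key])
            for r in readers.get(key, ()):
                deps[tx.index].add(r)
            last_writer[key] = tx.index
        for key in tx.reads:
            readers.setdefault(key, []).append(tx.index)
    return deps

# Two transfers touching disjoint accounts have no edges; a third transaction
# touching account "B" depends on tx 0, which wrote it.
txs = [Tx(0, {"A"}, {"A", "B"}), Tx(1, {"C"}, {"C", "D"}), Tx(2, {"B"}, {"B", "E"})]
deps = build_dependencies(txs)
# deps[1] is empty; deps[2] contains 0
```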

Step 2: Wave Construction

Given the dependency analysis, we partition transactions into “waves.” A wave is a set of transactions that have no mutual conflicts and can therefore execute in parallel.

The wave construction algorithm is greedy:

  1. Start with all transactions in a “pending” pool.
  2. Select all transactions from the pending pool that have no unresolved dependencies (i.e., all their predecessors in the dependency graph have already been assigned to a previous wave).
  3. Assign these transactions to the current wave.
  4. Mark these transactions as resolved.
  5. Repeat until the pending pool is empty.

For a batch of 1000 simple transfer transactions with no conflicts, the entire batch forms a single wave and executes fully in parallel. For a worst-case batch where every transaction conflicts with every other (e.g., 1000 transactions all touching the same DEX pool), the batch degenerates to 1000 sequential waves of 1 transaction each — equivalent to sequential execution.

Real workloads fall between these extremes. A typical mixed block of 1000 transactions might decompose into 15-30 waves, with the largest waves containing 200+ transactions.
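
The greedy loop above is short enough to show in full. This is an illustrative sketch (the name build_waves is hypothetical), where deps maps each transaction index to the set of earlier transactions it conflicts with:

```python
# Greedy wave construction: each wave takes every pending transaction whose
# predecessors have already been assigned to an earlier wave.
def build_waves(deps):
    pending = set(deps)
    resolved = set()
    waves = []
    while pending:
        wave = {i for i in pending if deps[i] <= resolved}
        waves.append(sorted(wave))
        resolved |= wave
        pending -= wave
    return waves

# No conflicts: one wave. A strict chain 0 -> 1 -> 2: three waves of one tx.
print(build_waves({0: set(), 1: set(), 2: set()}))  # [[0, 1, 2]]
print(build_waves({0: set(), 1: {0}, 2: {1}}))      # [[0], [1], [2]]
```

Because dependency edges always point backward in transaction order, the graph is acyclic and the loop is guaranteed to terminate.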

Step 3: Work-Stealing Thread Pool

Each wave is dispatched to a work-stealing thread pool. The pool is initialized at startup with one worker thread per available CPU core (configurable). Work-stealing means that when a thread finishes its assigned transactions, it steals pending work from other threads’ queues rather than going idle.

The thread pool implementation is built on Zig’s std.Thread primitives. Each worker thread has:

  • A private arena allocator: All memory allocated during transaction execution uses this arena. There is no cross-thread allocation contention.
  • A local work queue: Transactions assigned to this thread are placed in a lock-free deque.
  • A steal-from pointer: When the local queue is empty, the thread scans other workers’ deques and steals from the back (the end opposite to where the owner pushes/pops).

This architecture ensures that core utilization remains high even when transactions within a wave have varying execution costs. A wave with 200 simple transfers and 50 complex contract calls will see the threads that finish the simple transfers early steal contract-call work from busier threads.
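
The access pattern can be illustrated with a toy, single-threaded sketch. Real work-stealing pools use lock-free deques and atomic operations; this only demonstrates the discipline described above, with owners working the front of their own deque and thieves taking from the back:

```python
# Toy illustration of the steal discipline (not a concurrent implementation).
from collections import deque

queues = [deque([f"tx{w}-{i}" for i in range(3)]) for w in range(2)]

def next_task(worker):
    if queues[worker]:
        return queues[worker].popleft()   # owner works the front of its deque
    for other, q in enumerate(queues):    # scan other workers' deques
        if other != worker and q:
            return q.pop()                # thief steals from the back
    return None

# Worker 1 drains its own queue, then steals from the back of worker 0's.
for _ in range(3):
    next_task(1)
stolen = next_task(1)
print(stolen)  # tx0-2
```

Stealing from the opposite end reduces contention with the owner and tends to take the largest remaining chunk of work.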

Step 4: Speculative Execution and State Versioning

Within a wave, transactions execute speculatively against a versioned state snapshot. Each transaction receives a read-only view of the pre-wave state and produces a write set (a set of state changes). These write sets are not immediately committed to the global state.

The speculative execution model works as follows:

  1. Snapshot: Before wave execution begins, the current state is snapshotted. This is a logical snapshot — we use copy-on-write semantics rather than physically copying the state tree.
  2. Execute: Each transaction in the wave executes against the snapshot, producing a local write set. The transaction sees the pre-wave state for all reads, and its writes are buffered locally.
  3. Validate: After all transactions in the wave complete, we check for speculative conflicts. A speculative conflict occurs when the dependency analysis was too optimistic — for example, when a contract’s storage access pattern depends on runtime values that differ from what static analysis predicted.
  4. Commit or Rollback: If no speculative conflicts are detected, all write sets from the wave are merged into the global state atomically. If conflicts are detected, the conflicting transactions are rolled back and re-queued for the next wave.

The rollback mechanism leverages Zig’s arena allocator design. Rolling back a transaction means discarding its arena — all allocated memory is freed in O(1), and the write set is simply dropped. There is no undo log, no reverse-application of state changes. The pre-wave state snapshot is still valid because writes were buffered, not applied in place.
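
The buffered-write discipline can be sketched in a few lines. The class name SpeculativeView is hypothetical; the point is that reads fall through to the pre-wave snapshot, writes stay in a local buffer, and rollback is simply dropping that buffer:

```python
# Minimal sketch of speculative execution with buffered writes.
class SpeculativeView:
    def __init__(self, snapshot):
        self.snapshot = snapshot  # read-only pre-wave state
        self.writes = {}          # local write buffer, merged only on commit
        self.reads = set()        # recorded for post-wave validation

    def get(self, key):
        self.reads.add(key)
        return self.writes.get(key, self.snapshot.get(key, 0))

    def put(self, key, value):
        self.writes[key] = value

state = {"alice": 100, "bob": 0}
view = SpeculativeView(state)
view.put("alice", view.get("alice") - 10)  # buffered, not applied
view.put("bob", view.get("bob") + 10)

assert state["alice"] == 100               # global state untouched so far
state.update(view.writes)                  # commit: merge the write buffer
assert state == {"alice": 90, "bob": 10}   # rollback would just drop `view`
```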

Step 5: Conflict Resolution

Speculative conflicts are rare in practice (under 2% of transactions in our benchmarks), but handling them correctly is essential for maintaining Ethereum-compatible execution semantics.

When a conflict is detected:

  1. The conflicting transactions are removed from the wave’s commit set.
  2. They are placed at the front of the pending pool for the next wave.
  3. Their dependency information is updated to reflect the newly discovered conflict.
  4. The non-conflicting transactions in the wave proceed with commit.

This means that a conflict does not invalidate the entire wave — only the specific conflicting transactions are rolled back. In the worst case, a single transaction might be rolled back multiple times before executing successfully, but this converges because each rollback adds a new edge to the dependency graph, eventually forcing the transaction into a wave where it has no conflicts (at worst, a wave containing only itself).
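
The partial-commit rule can be sketched as a validation pass over the wave's results (names here are illustrative): a transaction is re-queued if anything it read was written by another transaction in the same wave, and everything else commits.

```python
# Post-wave validation: commit non-conflicting txns, re-queue the rest.
def validate_wave(results):
    """results: list of (tx_index, read_set, write_set) for one wave."""
    commit, requeue = [], []
    for idx, reads, writes in results:
        others_writes = set().union(*[w for j, _, w in results if j != idx])
        if reads & others_writes:
            requeue.append(idx)   # speculative conflict: roll back this txn
        else:
            commit.append(idx)    # safe to merge into global state
    return commit, requeue

wave = [
    (0, {"A"}, {"A"}),
    (1, {"B"}, {"B"}),
    (2, {"A"}, {"C"}),  # read "A", which tx 0 wrote in this wave -> conflict
]
print(validate_wave(wave))  # ([0, 1], [2])
```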

Architecture Diagram

The overall flow for a batch of N transactions:

Input: N transactions
    |
    v
[1. Dependency Analysis] -- O(n) read/write set scan
    |
    v
[2. Wave Construction] -- Greedy partitioning into parallel batches
    |
    v
[3. Wave 1 Execution] -- Work-stealing thread pool, speculative
    |
    v
[4. Conflict Check] -- Validate write sets for speculative conflicts
    |         \
    v          v
[Commit]   [Rollback conflicting txns -> re-queue]
    |
    v
[3. Wave 2 Execution] -- Next wave, including any rolled-back txns
    |
    ...
    v
[All waves complete] -- Final state committed

The EVM Implementation

Beyond parallel execution, Zig-EVM is a complete, specification-compliant EVM. The implementation covers the following areas.

141 Opcodes

Zig-EVM implements all 141 opcodes defined in the Ethereum Yellow Paper and subsequent EIPs through the Shanghai upgrade. This includes:

  • Arithmetic: ADD, MUL, SUB, DIV, SDIV, MOD, SMOD, ADDMOD, MULMOD, EXP, SIGNEXTEND
  • Comparison and bitwise: LT, GT, SLT, SGT, EQ, ISZERO, AND, OR, XOR, NOT, BYTE, SHL, SHR, SAR
  • Cryptographic: KECCAK256
  • Environmental: ADDRESS, BALANCE, ORIGIN, CALLER, CALLVALUE, CALLDATALOAD, CALLDATASIZE, CALLDATACOPY, CODESIZE, CODECOPY, GASPRICE, EXTCODESIZE, EXTCODECOPY, RETURNDATASIZE, RETURNDATACOPY, EXTCODEHASH
  • Block information: BLOCKHASH, COINBASE, TIMESTAMP, NUMBER, PREVRANDAO, GASLIMIT, CHAINID, SELFBALANCE, BASEFEE
  • Stack/memory/storage: POP, MLOAD, MSTORE, MSTORE8, SLOAD, SSTORE, JUMP, JUMPI, PC, MSIZE, GAS, JUMPDEST, PUSH0, PUSH1-PUSH32, DUP1-DUP16, SWAP1-SWAP16
  • Logging: LOG0-LOG4
  • System: CREATE, CALL, CALLCODE, RETURN, DELEGATECALL, CREATE2, STATICCALL, REVERT, SELFDESTRUCT

256-bit BigInt Arithmetic

The EVM operates on 256-bit unsigned integers. Zig-EVM implements a full BigInt type backed by four 64-bit limbs, with all arithmetic operations implemented in Zig without relying on external bignum libraries. Operations include addition, subtraction, multiplication, division, modular arithmetic, exponentiation, and signed variants.

The BigInt implementation uses comptime-optimized code paths. For example, multiplication of a 256-bit value by a value known at compile time to fit in 64 bits uses a specialized single-limb multiply path rather than the general 4x4 limb multiplication.
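
The limb layout can be demonstrated with a short reference sketch. Python's arbitrary-precision integers serve as the oracle here; the 4x64-bit representation (least-significant limb first) and the carry propagation mirror what the text describes, though the actual Zig code is of course different:

```python
# Illustrative 256-bit addition over four 64-bit limbs with carry propagation.
MASK64 = (1 << 64) - 1

def to_limbs(x):
    return [(x >> (64 * i)) & MASK64 for i in range(4)]

def from_limbs(limbs):
    return sum(l << (64 * i) for i, l in enumerate(limbs))

def add256(a, b):
    """Add two 4-limb values, wrapping at 2**256 like the EVM does."""
    out, carry = [], 0
    for la, lb in zip(a, b):
        s = la + lb + carry
        out.append(s & MASK64)
        carry = s >> 64          # at most 1
    return out                   # the final carry is discarded: mod 2**256

x, y = (1 << 256) - 1, 1         # max uint256 + 1 wraps around to 0
assert from_limbs(add256(to_limbs(x), to_limbs(y))) == (x + y) % (1 << 256)
```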

Gas Metering

Every opcode execution is preceded by a gas check. Gas costs follow the Ethereum specification, including dynamic costs for memory expansion (quadratic), storage access (cold/warm pricing per EIP-2929), and call depth limits. The gas metering is deterministic — given the same transaction and state, Zig-EVM will compute exactly the same gas usage as geth or any other conformant EVM.
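
As a worked example of a dynamic cost, the Yellow Paper's quadratic memory-expansion rule charges cost(a) = 3a + floor(a^2 / 512) for a memory size of a 32-byte words, and an expansion pays the difference between the new and old totals:

```python
# Yellow Paper memory-expansion gas: total cost for `words` 32-byte words.
def memory_cost(words):
    return 3 * words + words * words // 512

def expansion_gas(old_bytes, new_bytes):
    old_words = (old_bytes + 31) // 32   # round sizes up to whole words
    new_words = (new_bytes + 31) // 32
    return max(0, memory_cost(new_words) - memory_cost(old_words))

print(expansion_gas(0, 32))      # 3: the first word costs only the linear term
print(expansion_gas(0, 1024))    # 98: 3*32 + 32*32 // 512
print(expansion_gas(0, 1 << 20)) # 1 MiB: the quadratic term dominates
```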

Nested Execution Contexts

Zig-EVM supports CALL, DELEGATECALL, STATICCALL, and CALLCODE with proper context isolation. Each nested call creates a new execution frame with its own stack, memory, and program counter, while sharing (or not sharing, depending on the call type) the storage context and value transfer semantics defined by the Ethereum specification.

The maximum call depth is 1024, matching the Ethereum specification. Each nested frame uses a child arena allocator, so unwinding a failed nested call frees all memory allocated by that call and its children in O(1).
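
The depth check and per-frame isolation can be sketched as follows. This is a hypothetical skeleton, with Python objects standing in for child arenas: dropping a frame models the O(1) arena teardown described above.

```python
# Sketch of nested-call bookkeeping with the Ethereum depth limit.
MAX_CALL_DEPTH = 1024

class Frame:
    def __init__(self, depth):
        self.depth = depth
        self.stack = []             # per-frame operand stack
        self.memory = bytearray()   # per-frame memory, never shared

def call(depth):
    if depth >= MAX_CALL_DEPTH:
        return None  # the call fails; the caller sees a failed CALL, not a crash
    frame = Frame(depth)
    # ... execute bytecode in `frame`; on revert, the frame is dropped, which
    # in Zig-EVM frees the whole child arena in a single operation ...
    return frame

assert call(0) is not None
assert call(1023) is not None
assert call(MAX_CALL_DEPTH) is None
```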


Benchmarks

All benchmarks were run on an AMD Ryzen 7 5800X (8 cores, 16 threads), 32GB DDR4-3200 RAM, NVMe SSD, Zig 0.14.0, -Doptimize=ReleaseFast. Each result is the median of 10 runs.

Workload Comparison (8 Threads)

| Workload | Txn Count | Sequential | Parallel | Speedup | Conflict Rate |
|---|---|---|---|---|---|
| Simple ETH transfers | 1000 | 970ms | 162ms | 5.98x | 0.0% |
| ERC-20 token transfers | 1000 | 1050ms | 185ms | 5.68x | 0.3% |
| DEX swaps (Uniswap-style) | 500 | 1240ms | 245ms | 5.06x | 1.8% |
| Mixed workload | 1000 | 1100ms | 198ms | 5.56x | 0.9% |
| Heavy storage (NFT mints) | 500 | 890ms | 168ms | 5.30x | 1.2% |
| Contract deployments | 200 | 620ms | 118ms | 5.25x | 0.1% |

Thread Scaling (1000 Simple Transfers)

| Threads | Execution Time | Speedup vs Sequential | Speedup vs 1 Thread |
|---|---|---|---|
| 1 (sequential) | 970ms | 1.00x | 1.00x |
| 2 | 498ms | 1.95x | 1.95x |
| 4 | 261ms | 3.72x | 3.72x |
| 8 | 162ms | 5.98x | 5.98x |
| 16 | 139ms | 6.98x | 6.98x |

The sub-linear scaling beyond 8 physical cores (5.98x on 8 cores, 6.98x on 16 threads with SMT) is expected. Hyperthreads share execution units, so the marginal throughput per additional SMT thread is roughly 15-20% rather than 100%.

Batch Size Scaling (8 Threads, Simple Transfers)

| Batch Size | Sequential | Parallel | Speedup |
|---|---|---|---|
| 100 | 97ms | 21ms | 4.62x |
| 500 | 485ms | 84ms | 5.77x |
| 1000 | 970ms | 162ms | 5.98x |
| 5000 | 4850ms | 798ms | 6.08x |
| 10000 | 9700ms | 1580ms | 6.14x |

Parallel efficiency improves with batch size because the fixed overhead of dependency analysis and wave construction is amortized over more transactions. For small batches (<100 transactions), the overhead may exceed the parallel benefit, and sequential execution can be faster.


Comparison with Existing EVMs

| Feature | Zig-EVM | reth (revm) | geth | evmone | Besu |
|---|---|---|---|---|---|
| Language | Zig | Rust | Go | C++ | Java |
| Parallel execution | Wave-based (5-6x) | Block-level (experimental) | No | No | Planned |
| Embeddable via C ABI | Yes (native) | Partial (extern "C") | No (cgo overhead) | Yes | No (JVM) |
| Opcode count | 141 | 141+ | 141+ | 141+ | 141+ |
| 256-bit arithmetic | Native (Zig BigInt) | ruint crate | Go big.Int | intx library | Java BigInteger |
| Gas metering | Ethereum-compatible | Ethereum-compatible | Reference impl | Ethereum-compatible | Ethereum-compatible |
| Memory model | Arena per txn | Global allocator | GC | new/delete | GC |
| FFI bindings | C, Python, Rust, JS | Rust only | Go only | C, C++ | Java/JNI |
| Primary use case | Embedding, L2 engines | Full node | Full node | Benchmarking | Enterprise nodes |
| License | MIT | MIT/Apache-2.0 | LGPL-3.0 | Apache-2.0 | Apache-2.0 |
| Nested calls | Yes (1024 depth) | Yes | Yes | Yes | Yes |

Key differentiators: Zig-EVM is the only implementation that combines parallel execution, native C ABI embeddability, and multi-language FFI bindings. reth’s revm is a high-quality Rust implementation but does not currently offer wave-based parallel execution or first-class C ABI embedding. geth is the reference implementation but Go’s garbage collector and cgo overhead make it unsuitable for embedding. evmone is embeddable via C++ but does not support parallel execution.


FFI Bindings: Embedding Zig-EVM in Any Language

Zig-EVM is designed as an embeddable execution engine. The core library compiles to a shared object (.so/.dylib/.dll) with a C ABI, and language-specific bindings wrap this interface.

C Binding (zigevm.h)

The C header provides direct access to all Zig-EVM functionality. This is the lowest-level binding and the foundation for all other language bindings.

#include "zigevm.h"
#include <stdio.h>

int main() {
    // Initialize the EVM with default configuration
    ZigEvmConfig config = zigevm_default_config();
    config.chain_id = 1;
    config.gas_limit = 30000000;
    config.parallel_threads = 8;

    ZigEvm *evm = zigevm_create(&config);
    if (!evm) {
        fprintf(stderr, "Failed to create EVM instance\n");
        return 1;
    }

    // Set up a simple ETH transfer
    ZigEvmTx tx = {
        .from = {0x01, 0x02, /* ... 20 bytes ... */},
        .to   = {0xAB, 0xCD, /* ... 20 bytes ... */},
        .value = zigevm_bigint_from_u64(1000000000000000000ULL), // 1 ETH
        .gas_limit = 21000,
        .data = NULL,
        .data_len = 0,
    };

    ZigEvmResult result = zigevm_execute(evm, &tx);
    if (result.success) {
        printf("Gas used: %llu\n", result.gas_used);
    } else {
        printf("Execution failed: %s\n", result.error);
    }

    // Batch execution with parallel processing
    ZigEvmTx batch[1000];
    // ... populate batch ...

    ZigEvmBatchResult batch_result = zigevm_execute_batch(
        evm, batch, 1000, ZIGEVM_PARALLEL
    );
    printf("Batch: %d succeeded, %d failed, %llu ms\n",
           batch_result.success_count,
           batch_result.failure_count,
           batch_result.elapsed_ms);

    zigevm_destroy(evm);
    return 0;
}

Compile with:

gcc -o my_app my_app.c -L/path/to/zig-evm/lib -lzigevm

Python Binding (ctypes)

The Python binding wraps the C shared library using ctypes, providing a Pythonic API with context managers and batch execution support.

from zigevm import EVM, BatchExecutor, BigInt

# Initialize the EVM
evm = EVM(chain_id=1, gas_limit=30_000_000)

# Execute a single transaction
result = evm.execute(
    sender="0x0102030405060708091011121314151617181920",
    to="0xABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD",
    value=BigInt.from_eth(1.0),
    gas_limit=21000,
)
print(f"Gas used: {result.gas_used}")
print(f"Success: {result.success}")

# Batch execution with parallel processing
with BatchExecutor(evm, threads=8) as batch:
    for i in range(1000):
        batch.add(
            sender=senders[i],
            to=receivers[i],
            value=BigInt.from_wei(amounts[i]),
            gas_limit=21000,
        )
    results = batch.execute()

print(f"Executed {len(results)} txns in {batch.elapsed_ms}ms")
print(f"Speedup: {batch.sequential_ms / batch.elapsed_ms:.2f}x")

Install via pip:

pip install zigevm

Rust Binding (Crate)

The Rust crate provides a safe wrapper around the C FFI with optional ethers and serde feature flags for ecosystem integration.

use zigevm::{Evm, EvmConfig, Transaction, BigInt, ExecutionMode};

fn main() -> Result<(), zigevm::Error> {
    // Initialize with builder pattern
    let evm = Evm::new(
        EvmConfig::builder()
            .chain_id(1)
            .gas_limit(30_000_000)
            .parallel_threads(8)
            .build()
    )?;

    // Single transaction execution
    let tx = Transaction {
        from: "0x0102...1920".parse()?,
        to: Some("0xABCD...ABCD".parse()?),
        value: BigInt::from_eth(1),
        gas_limit: 21_000,
        data: Vec::new(),
    };

    let result = evm.execute(&tx)?;
    println!("Gas used: {}", result.gas_used);

    // Batch parallel execution
    let transactions: Vec<Transaction> = (0..1000)
        .map(|i| Transaction {
            from: senders[i],
            to: Some(receivers[i]),
            value: BigInt::from_wei(amounts[i]),
            gas_limit: 21_000,
            data: Vec::new(),
        })
        .collect();

    let batch_result = evm.execute_batch(&transactions, ExecutionMode::Parallel)?;
    println!(
        "Batch: {} succeeded in {}ms (speedup: {:.2}x)",
        batch_result.success_count,
        batch_result.elapsed_ms,
        batch_result.speedup,
    );

    Ok(())
}

Add to Cargo.toml:

[dependencies]
zigevm = { version = "0.1", features = ["ethers", "serde"] }

JavaScript Binding (N-API)

The JavaScript binding uses N-API for native Node.js integration, with async batch execution that does not block the event loop.

const { EVM, BatchExecutor, BigInt } = require('zigevm');

async function main() {
    // Initialize the EVM
    const evm = new EVM({
        chainId: 1,
        gasLimit: 30_000_000,
        parallelThreads: 8,
    });

    // Single transaction
    const result = evm.execute({
        from: '0x0102030405060708091011121314151617181920',
        to: '0xABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD',
        value: BigInt.fromEth(1.0),
        gasLimit: 21000,
    });
    console.log(`Gas used: ${result.gasUsed}`);

    // Async batch execution (non-blocking)
    const batch = new BatchExecutor(evm);
    for (let i = 0; i < 1000; i++) {
        batch.add({
            from: senders[i],
            to: receivers[i],
            value: BigInt.fromWei(amounts[i]),
            gasLimit: 21000,
        });
    }

    const batchResult = await batch.execute();
    console.log(`Executed ${batchResult.successCount} txns in ${batchResult.elapsedMs}ms`);
    console.log(`Speedup: ${batchResult.speedup.toFixed(2)}x`);
}

main().catch(console.error);

Install via npm:

npm install zigevm

Use Cases

L2 and Rollup Execution Engines

The primary use case for Zig-EVM is as a drop-in execution engine for L2 rollups. A rollup sequencer receives a batch of transactions, orders them, and needs to execute them to produce a state root. This execution is the throughput bottleneck for most rollup architectures.

Zig-EVM can replace the execution component while maintaining full EVM compatibility. The sequencer passes a batch of transactions to zigevm_execute_batch, receives the resulting state changes, and produces the state root. The 5-6x throughput improvement directly translates to either higher transaction throughput (more transactions per block) or lower block times (same transactions, faster finality).

For optimistic rollups, faster execution also means faster fraud proof generation. A verifier re-executing a disputed batch can complete the verification in a fraction of the time required by a sequential EVM.

For ZK rollups, Zig-EVM can serve as the “native execution” engine that runs alongside the ZK proving pipeline. While the ZK prover generates proofs asynchronously, Zig-EVM provides fast execution for state queries, gas estimation, and transaction simulation.

The FFI bindings make integration practical regardless of the rollup’s primary language. A Go-based sequencer uses the C binding. A Rust-based sequencer uses the Rust crate. The execution semantics are identical in every case.

EVM Research and Experimentation

Zig-EVM’s codebase is designed for readability and modification. The explicit allocator pattern and absence of hidden control flow make it straightforward to add new opcodes, modify gas costs, or experiment with alternative execution models.

Researchers can fork Zig-EVM to prototype EIPs (Ethereum Improvement Proposals) before they are implemented in production clients. The parallel execution infrastructure makes it possible to evaluate the parallelism characteristics of proposed EVM changes — for example, testing whether a new opcode would increase or decrease the conflict rate in parallel execution.

The comptime opcode dispatch table means adding a new opcode is a matter of adding an entry to a compile-time array and implementing the handler function. The build system validates gas cost consistency at compile time, catching specification errors before they reach tests.

Transaction Simulation and Gas Estimation

Applications such as wallets, block explorers, and MEV searchers need to simulate transaction execution without committing state changes. Zig-EVM’s speculative execution model is naturally suited to this use case — it already implements “execute and optionally discard” as a core primitive.

Batch simulation is particularly valuable for MEV research. A searcher can simulate thousands of transaction orderings in parallel to find optimal bundle compositions. With Zig-EVM, simulating 1000 transaction orderings takes ~162ms instead of ~970ms, enabling more exploration within the same time budget.

Gas estimation benefits from the same architecture. Estimating gas for a batch of pending transactions can be parallelized across the batch, returning results for all transactions in the time it takes to estimate one.

Blockchain Indexing and Data Pipelines

Indexers that need to re-execute historical blocks to extract events, trace calls, or compute derived state benefit directly from parallel execution. An indexer processing the Ethereum archive from genesis can re-execute blocks 5-6x faster than with a sequential EVM.

The Python binding is particularly relevant here. Many indexing pipelines are written in Python (using tools such as web3.py or similar frameworks). Replacing the execution backend with Zig-EVM via the Python binding provides a performance improvement without rewriting the pipeline.


Limitations and Known Constraints

Zig-EVM is a focused tool, not a full Ethereum client. Understanding its boundaries is important for evaluating fit.

No networking layer: Zig-EVM does not connect to the Ethereum P2P network, does not sync blocks, and does not maintain a state database. It is a pure execution engine. You provide the state and transactions; it returns the execution results.

No state storage: Zig-EVM operates on in-memory state provided by the caller. It does not implement a Merkle Patricia Trie, does not persist state to disk, and does not compute state roots. The caller is responsible for state management. This is by design — it allows Zig-EVM to be embedded in systems that use any state storage backend.

Static analysis limitations: The dependency analysis relies on static analysis of bytecode to predict storage access patterns. Transactions with highly dynamic storage access (e.g., contracts that compute storage keys from runtime inputs like block timestamps or oracle prices) may be conservatively serialized, reducing parallel speedup for those specific transactions. The conflict rate for such workloads is typically 1-3%, resulting in a modest reduction in speedup but not a correctness issue.

Partial precompile coverage: Common precompiles (ecrecover, SHA-256, RIPEMD-160, identity, modexp, ecAdd, ecMul, ecPairing) are implemented. Less common precompiles from recent EIPs may be missing. Check the documentation for the current list.

Zig ecosystem maturity: Zig is a younger language than Rust or Go. The ecosystem of libraries, tooling, and developer familiarity is smaller. This is a consideration for teams evaluating long-term maintenance. However, Zig-EVM’s FFI bindings mean that consumers of the library do not need Zig expertise — they interact through C, Python, Rust, or JavaScript APIs.

Single-machine parallelism only: Zig-EVM’s parallel execution is multi-threaded, not distributed. It does not support spreading execution across multiple machines. For workloads that exceed single-machine capacity, the caller must implement batch partitioning.
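A caller-side partitioning step can be as simple as the hypothetical helper below. Zig-EVM does not ship anything like this; it is a sketch of the kind of contiguous, order-preserving split the caller would own before handing each batch to a separate machine.

```python
# Hypothetical caller-side helper (not part of Zig-EVM): split an
# ordered transaction list into contiguous per-machine batches so that
# original transaction order is preserved inside each batch.

def partition_batches(txs, machines):
    """Contiguous split of txs across `machines` workers."""
    if machines < 1:
        raise ValueError("need at least one machine")
    if not txs:
        return []
    size = -(-len(txs) // machines)  # ceiling division
    return [txs[i:i + size] for i in range(0, len(txs), size)]
```

Note that cross-batch dependencies are the hard part of any distributed scheme; a contiguous split only preserves intra-batch ordering, which is why distributed execution remains roadmap work rather than a caller-side recipe.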

No Cancun/Dencun opcodes yet: The current implementation covers opcodes through the Shanghai upgrade. EIP-4844 blob transaction support and other Cancun changes are on the roadmap but not yet implemented.


Getting Started

Building from Source

git clone https://github.com/cryptuon/zig-evm
cd zig-evm
zig build -Doptimize=ReleaseFast

This produces the shared library (libzigevm.so / libzigevm.dylib), the C header (zigevm.h), and the command-line test runner.

Running Benchmarks

zig build bench -Doptimize=ReleaseFast

This runs the full benchmark suite and prints results for sequential vs. parallel execution across all workload types.

Installing Language Bindings

# Python
pip install zigevm

# Rust (add to Cargo.toml)
# zigevm = { version = "0.1", features = ["ethers", "serde"] }

# JavaScript
npm install zigevm

Frequently Asked Questions

Is Zig-EVM a full Ethereum node?

No. Zig-EVM is a pure execution engine. It does not handle networking, block synchronization, state storage, consensus, or peer discovery. It takes transactions and state as input and produces execution results as output. It is designed to be embedded in systems that handle those other responsibilities — such as rollup sequencers, indexers, or testing frameworks.

Does parallel execution change the execution semantics?

No. Zig-EVM’s parallel execution produces exactly the same results as sequential execution. The dependency analysis ensures that only independent transactions execute in parallel. When speculative conflicts are detected, the conflicting transactions are rolled back and re-executed in the correct order. The final state is identical to what a sequential EVM would produce, and gas accounting is preserved.

How does Zig-EVM compare to Block-STM (used in Aptos)?

Block-STM is an optimistic parallel execution model where transactions execute speculatively and are validated after execution. Zig-EVM’s wave-based approach performs dependency analysis before execution, which avoids most speculative conflicts. The tradeoff: Zig-EVM’s pre-execution analysis adds latency for the dependency scan but reduces wasted work from rollbacks. Block-STM can start execution immediately but may perform more re-executions. For EVM workloads with predictable access patterns (the common case), Zig-EVM’s approach tends to be more efficient because the dependency analysis is O(n) while re-execution costs are proportional to transaction complexity.
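The wave idea can be sketched in a few lines of Python. The names (`Tx`, `build_waves`) and the per-key indexing are illustrative, not Zig-EVM's internals: each transaction declares predicted read/write sets, and a single pass assigns it to the earliest wave that comes after every earlier conflicting transaction, using hash maps keyed by storage slot rather than pairwise comparison.

```python
# Sketch of wave construction from declared read/write sets.
# Per-key hash maps keep the pass near O(n) in the number of
# transactions plus touched keys, instead of O(n^2) pairwise checks.

from dataclasses import dataclass

@dataclass
class Tx:
    txid: int
    reads: frozenset   # storage keys predicted to be read
    writes: frozenset  # storage keys predicted to be written

def build_waves(txs):
    last_write_wave = {}  # key -> latest wave that wrote this key
    last_read_wave = {}   # key -> latest wave that read this key
    waves = []
    for tx in txs:
        w = 0
        for k in tx.reads:   # read-after-write ordering
            w = max(w, last_write_wave.get(k, -1) + 1)
        for k in tx.writes:  # write-after-write and write-after-read
            w = max(w, last_write_wave.get(k, -1) + 1,
                       last_read_wave.get(k, -1) + 1)
        if w == len(waves):
            waves.append([])
        waves[w].append(tx.txid)
        for k in tx.writes:
            last_write_wave[k] = max(last_write_wave.get(k, -1), w)
        for k in tx.reads:
            last_read_wave[k] = max(last_read_wave.get(k, -1), w)
    return waves
```

Independent transactions all land in wave 0 and run concurrently; a chain of transactions that all write the same key degenerates to one wave per transaction, which is exactly the graceful fallback to sequential execution described below for fully conflicting batches.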

Can I use Zig-EVM in production?

Zig-EVM is designed for production embedding in L2 execution engines, simulation pipelines, and testing infrastructure. It has been tested against the Ethereum test suite for opcode correctness and gas metering. However, as with any new EVM implementation, we recommend thorough integration testing against your specific workload before deploying to a production chain. If you are building an L2 and want to evaluate Zig-EVM as your execution engine, contact us for integration support.

What happens if all transactions in a batch conflict with each other?

Zig-EVM degrades gracefully to sequential execution. If every transaction conflicts with every other transaction, the wave construction produces N waves of 1 transaction each, and execution proceeds sequentially. The overhead of the dependency analysis (~1-2ms for 1000 transactions) is the only cost beyond sequential execution. There is no performance cliff or failure mode — just reduced parallelism.

Why not just use revm (Rust)?

revm is an excellent EVM implementation and the execution engine behind reth. For teams already building in Rust, revm is a strong choice. Zig-EVM targets a different point in the design space: native C ABI embeddability without FFI friction, multi-language bindings as a first-class feature, and built-in parallel execution. If your sequencer is written in Python, JavaScript, or Go, Zig-EVM provides a lower-friction integration path than wrapping revm. If you need parallel execution today rather than waiting for it to be added to reth, Zig-EVM delivers it now.


What We Are Building Next

Zig-EVM is actively developed. Upcoming work includes EIP-4844 blob transaction support, distributed parallel execution across multiple machines, a built-in state trie for standalone operation, and expanded precompile coverage.

We are looking for L2 teams to integrate Zig-EVM as their execution engine and validate the parallel speedup on production workloads. We are also interested in collaborations with EVM research groups experimenting with new opcode designs and gas models.

If you are building infrastructure that would benefit from a faster, embeddable EVM, explore our solutions or reach out directly.

Zig-EVM is open source under the MIT license: github.com/cryptuon/zig-evm.

Dipankar Sarkar

Founder, Cryptuon

Blockchain researcher and systems engineer. Author of 5 published papers on cross-chain composability, MEV mitigation, and DePIN protocols. Building production blockchain infrastructure in Rust and Zig.
