
How to Prevent Validator Slashing: A Policy-Enforcement Approach

Slashing has cost validators millions. nklave adds a trust boundary between your validator client and signing keys that enforces policy before every signature.

Dipankar Sarkar

Validator slashing is prevented by inserting a policy-enforcement layer between the consensus client and signing keys that rejects any message violating protocol rules before the key ever signs it. nklave is an open-source tool that acts as this trust boundary — it intercepts every signing request, checks it against slashing-prevention policies (double-vote detection, surround-vote detection, slot validation), and only forwards safe requests to the key material. It is Web3Signer-compatible, works with all major Ethereum consensus clients, and supports both Ethereum BLS and Cosmos Ed25519 keys.

TL;DR

  • Slashing is the protocol-level penalty for validators that sign conflicting messages. It destroys a portion of your staked ETH and forcibly exits you from the validator set.
  • The root cause is almost never malicious intent — it is misconfigured failovers, clock drift, duplicate validator instances, and infrastructure bugs.
  • The missing layer in today’s validator stack is a policy-enforcement boundary between the consensus client and the signing keys. Without it, any malformed or conflicting request from the client results in an immediate, irreversible penalty.
  • nklave fills that gap. It is a Rust-based, Web3Signer-compatible signing proxy that enforces anti-slashing rules, maintains rollback-resistant checkpoints, and writes every decision to an append-only audit log.
  • Cost: MIT licensed, open source, $0 in licensing. The cost of not running it is potentially millions in slashed stake.

What Is Validator Slashing?

Slashing is a punitive mechanism built into Proof-of-Stake consensus protocols. When a validator signs messages that violate the protocol’s safety rules, the network destroys a portion of that validator’s staked collateral and forcibly ejects them from the active validator set.

On Ethereum, slashing was introduced with the Beacon Chain (Phase 0) as part of the Casper FFG finality gadget. It serves two purposes: making attacks economically irrational, and providing automated enforcement that does not rely on social consensus or governance votes.

The slashed validator loses a minimum of 1/32 of their effective balance immediately. An additional “correlation penalty” scales based on how many other validators were slashed in the same time window — up to the full effective balance for large coordinated events. The validator is then queued for forced exit over approximately 36 days (8,192 epochs), during which they continue to lose stake through inactivity penalties.
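The penalty arithmetic can be sketched as follows. This is a simplified model of the spec's slashing logic: the constants are the post-Bellatrix values (`MIN_SLASHING_PENALTY_QUOTIENT = 32`, proportional multiplier 3), and the correlation formula omits the epoch-window bookkeeping.

```python
GWEI_PER_ETH = 10**9
EFFECTIVE_BALANCE = 32 * GWEI_PER_ETH  # standard validator balance, in Gwei

# Initial penalty: 1/32 of effective balance.
initial_penalty = EFFECTIVE_BALANCE // 32  # 1 ETH

def correlation_penalty(total_slashed: int, total_staked: int) -> int:
    """Simplified correlation penalty: scales with the fraction of total
    stake slashed in the surrounding window, multiplied by 3, and capped
    at the validator's full effective balance."""
    increment = GWEI_PER_ETH  # EFFECTIVE_BALANCE_INCREMENT (1 ETH)
    adjusted = min(total_slashed * 3, total_staked)
    numerator = (EFFECTIVE_BALANCE // increment) * adjusted
    return (numerator // total_staked) * increment

print(initial_penalty / GWEI_PER_ETH)  # 1.0
# A lone slashing in a 30M-ETH network incurs ~0 correlation penalty:
print(correlation_penalty(32 * GWEI_PER_ETH, 30_000_000 * GWEI_PER_ETH))  # 0
```

Note how the cap behaves: if the entire network were slashed in one window, the correlation penalty would consume the full 32 ETH effective balance.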

The Three Types of Slashable Offenses on Ethereum

Ethereum’s consensus protocol defines exactly three conditions under which a validator can be slashed. Understanding these precisely is essential to preventing them.

1. Proposer Double-Vote (Double Block Proposal)

A validator is slashed if it signs two different beacon block proposals for the same slot. This is defined in the Beacon Chain specification under is_slashable_block_header:

# A proposer is slashable if they sign two different blocks for the same slot
slot_1 == slot_2 AND root_1 != root_2

This most commonly occurs when a validator operator runs two instances of the same validator simultaneously — for example, during a migration where the old instance was not fully shut down, or in a failover configuration where both the primary and backup become active.
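A minimal Python sketch of this check, with headers reduced to (slot, block_root) pairs rather than the spec's full SSZ containers:

```python
def is_slashable_block_header(header_1: tuple, header_2: tuple) -> bool:
    """Two signed headers for the same slot with different roots => slashable."""
    slot_1, root_1 = header_1
    slot_2, root_2 = header_2
    return slot_1 == slot_2 and root_1 != root_2

# Duplicate validator instances each proposing a block for slot 1000:
print(is_slashable_block_header((1000, "0xaaaa"), (1000, "0xbbbb")))  # True
# Different slots are never a proposer double-vote:
print(is_slashable_block_header((1000, "0xaaaa"), (1001, "0xbbbb")))  # False
```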

2. Attester Double-Vote

A validator is slashed if it makes two different attestations for the same target epoch. The specification defines this in is_slashable_attestation_data:

# Double vote: same target epoch, different attestation data
attestation_1.target.epoch == attestation_2.target.epoch AND
attestation_1 != attestation_2

Like proposer double-votes, this typically results from running duplicate validator instances. Each instance independently creates an attestation for the same epoch, resulting in two conflicting signed messages.

3. Surround Vote

A surround vote occurs when a validator makes an attestation that “surrounds” a previous attestation — that is, the new attestation’s source is earlier and its target is later than an existing attestation, or vice versa. The specification defines this as:

# Surround vote: attestation_1 surrounds attestation_2, or vice versa
(attestation_1.source.epoch < attestation_2.source.epoch AND
 attestation_2.target.epoch < attestation_1.target.epoch)
OR
(attestation_2.source.epoch < attestation_1.source.epoch AND
 attestation_1.target.epoch < attestation_2.target.epoch)

Surround votes are the most insidious slashing condition because they can result from subtle state corruption. If a validator’s slashing protection database is lost or rolled back — due to a disk failure, a restore from an old backup, or a database corruption event — the validator may sign an attestation with a source epoch that is earlier than its most recent attestation, creating a surround vote.
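Both attester conditions combine into a single predicate. The sketch below reduces attestation data to source/target epochs plus a data root (the spec compares full AttestationData containers) and uses the symmetric surround form shown above:

```python
from typing import NamedTuple

class AttData(NamedTuple):
    source_epoch: int
    target_epoch: int
    data_root: str  # stand-in for hash_tree_root(AttestationData)

def is_slashable_attestation_data(a1: AttData, a2: AttData) -> bool:
    # Double vote: same target epoch, different attestation data.
    double_vote = a1 != a2 and a1.target_epoch == a2.target_epoch
    # Surround vote: either attestation surrounds the other.
    surround = (
        (a1.source_epoch < a2.source_epoch and a2.target_epoch < a1.target_epoch)
        or (a2.source_epoch < a1.source_epoch and a1.target_epoch < a2.target_epoch)
    )
    return double_vote or surround

# Rollback scenario: the validator attested (source=10, target=11); a
# restored-from-backup instance then signs (source=8, target=12), which
# surrounds the earlier vote.
before = AttData(10, 11, "0xaaaa")
after_rollback = AttData(8, 12, "0xbbbb")
print(is_slashable_attestation_data(before, after_rollback))  # True
```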

Slashing in Cosmos-Based Networks

Cosmos SDK chains implement slashing through the x/slashing module. Validators are slashed for double-signing (equivocation) at typically 5% of bonded tokens, and the validator is permanently “tombstoned” — removed from the active set with no ability to re-join with the same key. Downtime penalties are usually 0.01% with a temporary jailing period.

The consequences in Cosmos are compounded by the delegation model: every delegator who staked with a slashed validator takes a proportional loss. A single equivocation event can affect thousands of delegators and destroy the validator’s reputation permanently.


Real-World Slashing Incidents

Slashing is not theoretical. Hundreds of validators have been slashed on Ethereum mainnet, and the overwhelming majority were not attempting to attack the network.

Staked.us Infrastructure Incident (February 2023)

In February 2023, Staked.us (a subsidiary of Kraken) experienced an infrastructure issue that resulted in approximately 75 validators being slashed simultaneously. The incident was caused by a configuration error during an infrastructure migration that led to duplicate validator instances running at the same time.

The immediate slashing penalty for each validator was approximately 1 ETH (1/32 of the 32 ETH effective balance). However, because 75 validators were slashed in close temporal proximity, the correlation penalty amplified the losses. The total cost of the incident was estimated at over 100 ETH in direct penalties, plus the opportunity cost of forced exits and the reputational damage to the staking provider.

This incident demonstrated that even well-funded, professional staking operations are vulnerable to operational slashing — not from incompetence, but from the inherent complexity of managing distributed validator infrastructure at scale.

Medalla Testnet Mass Slashing (August 2020)

Before Ethereum’s mainnet Beacon Chain launch, the Medalla testnet experienced a mass slashing event that affected thousands of validators. A single time-synchronization issue with the Prysm client caused a cascading failure: Prysm nodes, which represented the majority of the testnet’s validators, began producing attestations with incorrect timestamps.

When the issue was identified and operators attempted to fix it by restarting their clients, many did so without properly preserving their slashing protection databases. The restarted clients then signed attestations that conflicted with their pre-crash attestations, triggering surround-vote slashings.

The Medalla incident was a watershed moment for the Ethereum staking community. It revealed that clock synchronization failures, client restarts, and slashing protection database management were all critical operational concerns — not edge cases.

Early Mainnet Slashings (December 2020 - March 2021)

In the first months after Beacon Chain genesis, a steady stream of validators were slashed. The most common causes were all operational: migration failures (not fully stopping the old instance), active-active failover misconfigurations, cloud orchestration spinning up duplicate containers, and operators accidentally importing mainnet keys into testnet configurations.

The Cumulative Cost

As of early 2026, over 400 validators have been slashed on Ethereum mainnet. The cumulative cost exceeds $10 million. On Cosmos-based networks, where tombstoning is permanent and delegator losses compound the impact, the aggregate damage is even higher.

The pattern is consistent: the vast majority of slashing events are caused by operational failures, not malicious intent.


The Missing Layer in Validator Architecture

To understand why slashing remains a persistent problem, consider the typical validator architecture:

Beacon Node (consensus layer: chain data, fork choice)
        ↓
Validator Client (Lighthouse / Teku / Prysm / Lodestar)
        ↓
Signing Keys (local keystore / remote signer)
        ↓
Signed Message → Network

In this architecture, the validator client decides what to sign and the keys sign it. There is a built-in slashing protection database (often called the “anti-slashing DB”) inside the validator client itself, but this mechanism has several critical weaknesses:

  1. It is coupled to the client. If the client crashes and the database is corrupted or lost, the protection is gone. A fresh client instance has no memory of what was previously signed.

  2. It is not portable. Migrating from one client to another (e.g., from Prysm to Lighthouse for client diversity) requires carefully exporting and importing the slashing protection database in EIP-3076 interchange format. Errors in this process are a common cause of slashing.

  3. It provides no audit trail. The built-in slashing protection does not log why a request was rejected or provide compliance-grade records of signing activity.

  4. It does not enforce organizational policies. An institutional staking operation may want to enforce rules beyond basic slashing prevention — for example, restricting which slots a validator should sign for, or requiring that all signing activity be logged to an external system.

  5. It is a single point of failure. If a bug in the validator client bypasses the built-in slashing protection (which has happened), there is no secondary check before the key signs the malicious message.

What is missing is an independent, external policy-enforcement boundary that sits between the validator client and the signing keys. This boundary should be stateful (it maintains its own record of what has been signed), independent (it runs as a separate process with its own persistence), and auditable (it logs every decision).


nklave: Policy-Enforcing Trust Boundary for PoS Validators

nklave is the layer that fills this gap. Built by Cryptuon, it is a Rust-based signing proxy that interposes between validator clients and key material, enforcing slashing-prevention policies on every signing request. The core design principle: the signing key should never see a request that violates protocol safety rules.

Architecture

Beacon Node
        ↓
Validator Client (Lighthouse / Teku / Prysm / Lodestar)
        ↓
nklave (policy enforcement + audit)
        ↓
Signing Keys (local keystore / HSM / remote signer)
        ↓
Signed Message → Network

nklave runs as a standalone service that exposes a Web3Signer-compatible REST API. The validator client is configured to use nklave as its remote signer (the same way it would be configured to use Web3Signer). nklave receives signing requests, evaluates them against its policy engine, and either forwards approved requests to the actual key material or rejects them with a detailed error.
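For illustration, here is the shape of a Web3Signer-style attestation signing request as a validator client would POST it. Field names follow the public Web3Signer REST API; the exact schema nklave accepts, and its endpoint paths, should be verified against the project's own documentation.

```python
import json

PUBKEY = "0x" + "ab" * 48  # 48-byte BLS12-381 public key, hex-encoded

# Web3Signer-style signing endpoint; nklave exposes the same API surface.
url = f"http://127.0.0.1:9000/api/v1/eth2/sign/{PUBKEY}"

request_body = {
    "type": "ATTESTATION",
    "signingRoot": "0x" + "00" * 32,
    "attestation": {
        "slot": "123456",
        "index": "7",
        "beacon_block_root": "0x" + "11" * 32,
        "source": {"epoch": "3857", "root": "0x" + "22" * 32},
        "target": {"epoch": "3858", "root": "0x" + "33" * 32},
    },
}

payload = json.dumps(request_body)
# An approved request returns the signature; a policy violation returns an
# HTTP error explaining which rule failed (double vote, surround, etc.).
print(url)
```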

The Six Crates

nklave is structured as a Rust workspace of six crates, each with a focused responsibility:

  • nklave-core: The policy engine. Contains the slashing-prevention logic, state machine, and rule evaluation framework.
  • nklave-api: The Web3Signer-compatible HTTP API layer. Handles request parsing, routing, and response formatting.
  • nklave-store: The persistence layer. Manages rollback-resistant checkpoints and the append-only audit log.
  • nklave-keys: Key management. Handles loading, caching, and interfacing with key material (local keystores, HSMs, remote signers).
  • nklave-dashboard: The embedded Vue.js monitoring dashboard. Provides real-time visibility into signing activity, policy violations, and system health.
  • nklave-cli: Command-line interface for configuration, key import, database management, and operational tasks.

How Signing Request Interception Works

When a validator client sends a signing request to nklave, the following sequence executes:

Step 1: Request Parsing. nklave parses the incoming HTTP request according to the Web3Signer API specification. It extracts the signing request type (block proposal, attestation, aggregate attestation, sync committee contribution, etc.) and the associated data.

Step 2: Validator Identification. The request includes the public key of the validator. nklave looks up the validator’s signing history from its internal state store.

Step 3: Policy Evaluation. The request is evaluated against all active policies. For standard slashing prevention, this includes:

  • Double-vote check: Has this validator already signed a different block for this slot (proposals) or a different attestation for this target epoch (attestations)?
  • Surround-vote check: Would this attestation surround or be surrounded by any previously signed attestation? nklave maintains a complete history of source/target epoch pairs and checks every new attestation against this history.
  • Slot validation: Is the requested slot/epoch within a reasonable range of the current time? This catches clock-drift issues before they result in problematic signatures.
  • Source epoch monotonicity: Is the attestation’s source epoch at least as recent as the most recent source epoch we have signed for? This prevents rollback-induced surround votes.

Step 4: Decision. If all policies pass, nklave forwards the signing request to the configured key backend and returns the resulting signature to the validator client. If any policy fails, nklave returns an HTTP error response with a detailed explanation of which policy was violated and why.

Step 5: State Update. Regardless of the decision, nklave updates its internal state. Approved requests update the validator’s signing history (latest slot, latest source/target epochs). All requests — approved and rejected — are written to the append-only audit log.

Step 6: Checkpoint. Periodically, nklave writes a rollback-resistant checkpoint of its state. These checkpoints are designed to survive process crashes, disk failures, and restore-from-backup scenarios. The checkpoint mechanism ensures that even if nklave is restarted from an older state, it will not approve requests that would conflict with previously signed messages.
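The attestation checks from Step 3, together with the state update from Step 5, can be condensed into a toy policy engine. This is illustrative only: the class and method names are hypothetical, and real state would be persisted and checkpointed, not held in an in-memory dict.

```python
class PolicyEngine:
    """Toy in-memory version of nklave-style attestation policy checks."""

    def __init__(self):
        # pubkey -> {"attested": {target_epoch: (source_epoch, data_root)},
        #            "max_source": highest source epoch signed so far}
        self.history = {}

    def check_attestation(self, pubkey, source, target, data_root):
        h = self.history.setdefault(pubkey, {"attested": {}, "max_source": -1})
        # Double-vote check: a different attestation for the same target epoch.
        prev = h["attested"].get(target)
        if prev is not None and prev != (source, data_root):
            return "REJECT: double vote for target epoch %d" % target
        # Source monotonicity: prevents rollback-induced surround votes.
        if source < h["max_source"]:
            return "REJECT: source epoch regressed (possible rollback)"
        # Surround-vote check against the full signing history.
        for t, (s, _) in h["attested"].items():
            if (source < s and t < target) or (s < source and target < t):
                return "REJECT: surround vote"
        # Approved: update the validator's signing history (Step 5).
        h["attested"][target] = (source, data_root)
        h["max_source"] = max(h["max_source"], source)
        return "APPROVE"

engine = PolicyEngine()
print(engine.check_attestation("0xval", 10, 11, "0xaaaa"))  # APPROVE
# A duplicate instance votes differently for the same target epoch:
print(engine.check_attestation("0xval", 10, 11, "0xbbbb"))  # rejected: double vote
# A rolled-back instance tries an earlier source epoch:
print(engine.check_attestation("0xval", 8, 12, "0xcccc"))   # rejected: source regressed
```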

The Append-Only Audit Log

Every signing request that passes through nklave is recorded in an append-only log. Each log entry includes:

  • Timestamp (UTC, nanosecond precision)
  • Validator public key
  • Request type (proposal, attestation, etc.)
  • Request details (slot, epoch, source, target, block root)
  • Decision (approved / rejected)
  • Policy evaluation results (which policies were checked, which passed, which failed)
  • Signing latency

The log is append-only by design: entries can be added but never modified or deleted. This provides a tamper-evident record suitable for compliance audits, incident investigation, and operational analytics. For institutional operations subject to regulatory requirements, it serves as evidence that appropriate controls were in place and functioning.
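A minimal append-only writer for entries with these fields might look like this. The JSON-lines encoding is an assumption for illustration; nklave's actual on-disk format may differ.

```python
import json, os, tempfile, time

def append_audit_entry(path, pubkey, request_type, details, decision,
                       policies, latency_ns):
    entry = {
        "ts_ns": time.time_ns(),   # UTC wall clock, nanosecond precision
        "validator": pubkey,
        "type": request_type,      # "proposal", "attestation", ...
        "details": details,        # slot, epoch, source, target, block root
        "decision": decision,      # "approved" / "rejected"
        "policies": policies,      # per-policy pass/fail results
        "latency_ns": latency_ns,
    }
    # Append-only by construction: open in "a" mode and only ever write
    # new lines; existing entries are never modified or deleted.
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

log_path = os.path.join(tempfile.mkdtemp(), "audit.log")
append_audit_entry(log_path, "0xval", "attestation",
                   {"slot": 123456, "source": 3857, "target": 3858},
                   "approved", {"double_vote": "pass", "surround": "pass"},
                   850_000)
with open(log_path) as f:
    lines = f.readlines()
print(len(lines))  # 1
```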

Rollback-Resistant Checkpoints

One of the most dangerous failure modes in validator operations is state rollback. If a validator’s slashing protection database is restored from a backup that predates recent signing activity, the validator may re-sign messages that conflict with messages signed after the backup was taken.

nklave addresses this with rollback-resistant checkpoints. The checkpoint mechanism works by writing state to multiple independent storage locations and using a monotonically increasing sequence number. When nklave starts up, it reads from all available checkpoint locations, identifies the most recent valid state, and refuses to operate if it detects that its state may have been rolled back.

This is a critical safety property: it is better for nklave to refuse to start (causing missed attestations and a small inactivity penalty) than to start with stale state (risking a slashing event that destroys 1/32 or more of the staked balance).
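The startup logic can be sketched like this. It is an illustrative model of the mechanism described above (multiple locations, monotonic sequence numbers, refuse-on-rollback), not nklave's actual on-disk format.

```python
import json, os, tempfile

def write_checkpoint(dirs, seq, state):
    """Write one checkpoint, tagged with a monotonic sequence number, to
    several independent locations (atomic rename per location)."""
    for d in dirs:
        os.makedirs(d, exist_ok=True)
        tmp = os.path.join(d, "checkpoint.tmp")
        with open(tmp, "w") as f:
            json.dump({"seq": seq, "state": state}, f)
        os.replace(tmp, os.path.join(d, "checkpoint.json"))

def load_latest(dirs, min_expected_seq=0):
    """Read every location, keep the highest sequence number, and refuse
    to start if all surviving copies predate known signing activity."""
    best = None
    for d in dirs:
        p = os.path.join(d, "checkpoint.json")
        if os.path.exists(p):
            with open(p) as f:
                cp = json.load(f)
            if best is None or cp["seq"] > best["seq"]:
                best = cp
    if best is None or best["seq"] < min_expected_seq:
        raise RuntimeError("possible state rollback detected; refusing to start")
    return best

dirs = [tempfile.mkdtemp(), tempfile.mkdtemp()]
write_checkpoint(dirs, seq=41, state={"latest_target_epoch": 3857})
write_checkpoint(dirs[:1], seq=42, state={"latest_target_epoch": 3858})  # 2nd disk lagging
print(load_latest(dirs)["seq"])  # 42: the newest surviving copy wins
```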

The Monitoring Dashboard

nklave includes an embedded Vue.js dashboard served directly by the nklave process — no separate web server or frontend deployment required. It provides:

  • Signing activity feed: Live stream of all signing requests with approve/reject status
  • Policy violation alerts: Highlighted rejected requests with detailed failure explanations
  • Validator health overview: Per-validator status showing latest signed slot and anomalies
  • Historical analytics: Signing volume, rejection rates, and latency over time
  • System health: Process metrics, checkpoint status, and audit log integrity

The dashboard is accessible via a configurable port and can be placed behind an authentication proxy for production use.


Comparison: nklave vs. Other Signing Solutions

The following table compares nklave against other tools in the validator signing stack.

| Feature | nklave | Web3Signer | Dirk | Vouch | Raw signing (local keystore) |
|---|---|---|---|---|---|
| Slashing prevention | Full policy engine (double-vote, surround-vote, slot validation, source monotonicity) | Basic slashing protection DB | Threshold signing with built-in slashing protection | Delegates to signer; provides strategy-level deduplication | Client built-in slashing protection DB only |
| Policy enforcement | Configurable rule engine with custom policies | None beyond slashing protection | None beyond slashing protection | Strategy-based validator selection, no signing-level policies | None |
| Append-only audit log | Yes, tamper-evident, nanosecond precision | No | No | Logging available, but not append-only or tamper-evident | No |
| Rollback-resistant checkpoints | Yes, multi-location with sequence numbers | No (relies on standard DB) | No | No | No (relies on client DB) |
| Multi-chain support | Ethereum BLS12-381, Cosmos Ed25519 | Ethereum BLS12-381, Filecoin (limited) | Ethereum BLS12-381 only | Ethereum only | Depends on client |
| Web3Signer API compatibility | Full (drop-in replacement) | Native | Not compatible (gRPC-based) | Not a signer (orchestration layer) | Not applicable |
| Monitoring dashboard | Embedded Vue.js dashboard | Metrics endpoint (Prometheus) | Metrics endpoint | Metrics and logging | Client-dependent |
| Consensus client support | Lighthouse, Teku, Prysm, Lodestar | Lighthouse, Teku, Prysm, Lodestar | Primarily Vouch | Works with Vouch | Client-specific |
| Architecture | Signing proxy (between client and keys) | Remote signer (holds keys) | Distributed key manager (threshold signing) | Validator orchestrator (above client) | Keys managed by client directly |
| License | MIT (open source) | Apache 2.0 (open source) | Apache 2.0 (open source) | Apache 2.0 (open source) | Client-dependent |

Key distinctions:

  • Web3Signer is a remote signer, not a policy engine. It signs what it is told to sign. nklave can sit in front of Web3Signer, adding policy enforcement while Web3Signer manages key material.

  • Dirk is a distributed key manager implementing threshold signing. It uses a gRPC API incompatible with Web3Signer. Best for operators who want threshold signing; nklave is best for policy enforcement.

  • Vouch is an orchestration layer above the validator client. It is complementary to nklave — Vouch handles strategy, nklave handles signing safety.

  • Raw local signing relies entirely on the client’s built-in slashing protection DB. It provides basic protection but lacks auditability, rollback resistance, and configurability.


Setup Guide: Deploying nklave with Lighthouse and Teku

Prerequisites

  • Rust 1.75+ (for building from source)
  • A running Beacon Node (any client)
  • A validator client (Lighthouse or Teku in this guide)
  • Validator keys (BLS12-381 keystores)

Step 1: Build nklave

git clone https://github.com/cryptuon/nklave
cd nklave
cargo build --release

The compiled binary will be at target/release/nklave.

Step 2: Initialize Configuration

./target/release/nklave init --config-dir /etc/nklave

This creates a default configuration file at /etc/nklave/config.toml and initializes the checkpoint and audit log directories.

Step 3: Import Validator Keys

./target/release/nklave keys import \
  --config-dir /etc/nklave \
  --keystore-dir /path/to/validator_keys \
  --password-file /path/to/password.txt

If you are migrating from an existing setup that has signing history, import the EIP-3076 slashing protection interchange file:

./target/release/nklave slashing-protection import \
  --config-dir /etc/nklave \
  --file /path/to/interchange-export.json

Step 4: Start nklave

./target/release/nklave run \
  --config-dir /etc/nklave \
  --listen-address 127.0.0.1 \
  --listen-port 9000 \
  --dashboard-port 8080

nklave is now listening for Web3Signer-compatible signing requests on 127.0.0.1:9000 and serving the monitoring dashboard on port 8080.

Step 5a: Configure Lighthouse to Use nklave

Edit your Lighthouse validator client startup command to point to nklave as the remote signer:

lighthouse vc \
  --beacon-nodes http://localhost:5052 \
  --web3-signer-url http://127.0.0.1:9000 \
  --suggested-fee-recipient 0xYOUR_FEE_RECIPIENT

Remove any --datadir flags that point to local keystores if you want nklave to be the sole key manager. Lighthouse will now send all signing requests to nklave instead of signing locally.

Step 5b: Configure Teku to Use nklave

For Teku, configure the remote signer in the validator configuration:

teku validator-client \
  --beacon-node-api-endpoint=http://localhost:5051 \
  --validators-external-signer-url=http://127.0.0.1:9000 \
  --validators-external-signer-public-keys=0xYOUR_VALIDATOR_PUBKEY_1,0xYOUR_VALIDATOR_PUBKEY_2

Alternatively, set these in Teku’s YAML configuration file:

validators-external-signer-url: "http://127.0.0.1:9000"
validators-external-signer-public-keys:
  - "0xYOUR_VALIDATOR_PUBKEY_1"
  - "0xYOUR_VALIDATOR_PUBKEY_2"

Step 6: Verify

Open the nklave dashboard at http://localhost:8080 and confirm that signing requests are flowing through. You should see attestation requests appearing every epoch (6.4 minutes on Ethereum mainnet) and block proposal requests whenever your validator is selected to propose.

Check the audit log to confirm that requests are being evaluated and recorded:

./target/release/nklave logs tail --config-dir /etc/nklave --count 20

Security Model: What nklave Protects Against (and What It Does Not)

Attack Vectors nklave Mitigates

Duplicate validator instances. The most common cause of slashing. If two validator clients are accidentally running with the same keys, both will send signing requests to nklave. nklave will approve the first valid request and reject any conflicting subsequent requests. This is the single highest-value protection nklave provides.

State rollback after infrastructure failure. If a validator client crashes and restarts with a stale slashing protection database (or no database at all), it may attempt to sign messages that conflict with previously signed messages. nklave’s independent state and rollback-resistant checkpoints prevent this.

Clock synchronization issues. If a validator client’s system clock drifts, it may request signatures for slots that are in the past or far in the future. nklave’s slot validation policy catches these anomalies.
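The slot-validation check reduces to comparing the requested slot against the wall clock. A sketch, assuming mainnet timing; the tolerance value is an assumed parameter, not nklave's documented default:

```python
SECONDS_PER_SLOT = 12  # Ethereum mainnet

def slot_in_range(requested_slot, genesis_time, now, tolerance_slots=2):
    """Reject signing requests for slots far from where the clock says we are."""
    current_slot = max(0, (now - genesis_time) // SECONDS_PER_SLOT)
    return abs(requested_slot - current_slot) <= tolerance_slots

GENESIS = 1_606_824_023  # Ethereum mainnet beacon genesis (Dec 1, 2020, UTC)
now = GENESIS + 1000 * SECONDS_PER_SLOT  # the clock currently reads slot 1000
print(slot_in_range(1000, GENESIS, now))  # True
print(slot_in_range(1500, GENESIS, now))  # False: clock drift or a bad request
```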

Client bugs that bypass built-in slashing protection. Validator client software is complex and occasionally has bugs. If a bug in the client’s slashing protection logic allows a conflicting message to be constructed, nklave serves as an independent second check that catches the conflict.

Unauthorized signing activity. The audit log provides a complete record of all signing requests, enabling detection and investigation of anomalous activity.

Migration errors. When moving validators between clients or machines, the EIP-3076 interchange process is error-prone. nklave’s independent state persists across client migrations, providing continuous protection.

What nklave Does Not Protect Against

Key compromise. If an attacker gains access to the raw key material outside of nklave, they can sign without nklave’s involvement. nklave protects against accidental conflicting signatures routed through it, not key exfiltration.

Process compromise. If an attacker gains root access to the nklave host and modifies the binary or state database, policy enforcement can be bypassed. nklave should run in a hardened environment with appropriate access controls.

Liveness failures. If nklave crashes, validators will miss attestations (incurring small inactivity penalties). However, inactivity penalties are orders of magnitude smaller than slashing penalties, so a nklave outage is far preferable to a slashing event.

Consensus-layer attacks. nklave does not validate the beacon node’s fork choice or chain data. If the beacon node provides incorrect data (e.g., due to an eclipse attack), this is a beacon-node-level concern outside nklave’s scope.

Protocol-level vulnerabilities. nklave operates at the validator signing layer and does not protect against vulnerabilities in the consensus protocol itself.


ROI Calculation for Institutional Staking Operations

The economic argument for deploying nklave is straightforward. Here is a concrete model for an institutional staking operation.

Assumptions

  • Validators managed: 500
  • ETH staked per validator: 32 ETH
  • Total ETH staked: 16,000 ETH
  • ETH price: $3,000 (conservative estimate)
  • Total value staked: $48,000,000

Cost of a Slashing Event

Minimum penalty (single validator): 1 ETH ($3,000). This is the initial slashing penalty of 1/32 of the effective balance.

Correlation penalty (infrastructure incident affecting 50 validators): If 50 of your 500 validators are slashed in the same incident (as happened in the Staked.us case), the correlation penalty amplifies losses. With 50 validators slashed in a window where total network slashings are low, the correlation penalty can reach 2-3 ETH per validator beyond the initial penalty. Total direct loss: approximately 150-200 ETH ($450,000-$600,000).

Forced exit opportunity cost: Slashed validators are queued for forced exit over approximately 36 days. During this period, they earn no rewards and continue to lose small amounts to inactivity penalties. For 50 validators, this represents approximately 36 days of lost staking rewards: roughly 5-10 ETH ($15,000-$30,000).

Reputational cost: For institutional staking providers, a slashing event affects client confidence and may trigger redemptions. This cost is difficult to quantify but is often the largest component. Enterprise clients evaluating staking providers routinely ask about slashing prevention measures.

Total estimated cost of a single infrastructure incident: $500,000-$700,000 in direct penalties and opportunity costs, plus unquantifiable reputational damage.

Cost of Deploying nklave

  • Software licensing: $0 (MIT licensed, open source)
  • Infrastructure: nklave runs on the same machines as your validator clients. Additional resource overhead is minimal — a few hundred MB of RAM and negligible CPU.
  • Engineering time: Initial setup takes 2-4 hours for an experienced operator. Ongoing maintenance is minimal.
  • Estimated annual operational cost: Under $5,000 in engineering time for setup, monitoring, and updates.

ROI

Even if the probability of a slashing incident in any given year is only 5% (which is conservative for operations managing 500+ validators), the expected annual loss is:

5% × $600,000 = $30,000 expected annual loss

Against an annual operational cost of under $5,000, nklave delivers a 6x return on investment from risk reduction alone — before accounting for the compliance value of the audit log and the operational visibility provided by the dashboard.
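The expected-value arithmetic, spelled out. All inputs are the model's assumptions from above, not measurements:

```python
incident_cost = 600_000            # upper-end direct cost of one incident ($)
annual_incident_probability = 0.05
annual_tool_cost = 5_000           # engineering time for setup and upkeep ($)

expected_annual_loss = annual_incident_probability * incident_cost
roi_multiple = expected_annual_loss / annual_tool_cost

print(expected_annual_loss)  # 30000.0
print(roi_multiple)          # 6.0
```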

For operations with higher validator counts or in environments where the probability of operational errors is higher (multi-region deployments, frequent migrations, client-diversity rotations), the ROI is even more favorable.


Limitations

No tool eliminates all risk. It is important to understand nklave’s boundaries.

  1. Single-machine deployment. In the standard configuration, nklave runs as a single process on a single machine. If that machine fails, validators cannot sign. High-availability deployment patterns (active-passive failover with shared state) are possible but require additional operational complexity.

  2. Latency overhead. nklave adds a small amount of latency to every signing request (policy evaluation + audit log write). In practice, this is on the order of single-digit milliseconds — well within the acceptable range for Ethereum attestation timing. However, in edge cases where network latency is already borderline, this additional latency could cause a missed attestation.

  3. Not a substitute for proper key management. nklave enforces signing policies; it does not replace HSMs, key rotation procedures, or access controls on key material. It is one layer in a defense-in-depth security model, not the entire model.

  4. Ethereum and Cosmos only. nklave currently supports Ethereum BLS12-381 and Cosmos Ed25519 key types. Other PoS networks (Solana, Polkadot, Avalanche, etc.) are not currently supported.

  5. Depends on correct initial state. If nklave is initialized with incorrect or incomplete signing history (e.g., an EIP-3076 import that is missing recent entries), it may approve requests that conflict with messages signed before nklave was deployed. The initial import step is critical and should be verified carefully.

  6. Open-source maintenance model. As an open-source project, nklave depends on community and Cryptuon-funded maintenance. Institutional operators who require SLA-backed support should contact Cryptuon about enterprise support options.


Frequently Asked Questions

Does nklave work with all Ethereum consensus clients?

Yes. nklave implements the Web3Signer REST API specification, which is supported by Lighthouse, Teku, Prysm, and Lodestar. Any consensus client that can be configured to use a Web3Signer-compatible remote signer can use nklave as a drop-in replacement. No client-side code changes or patches are required.

Can I run nklave alongside Web3Signer?

Yes, and this is a recommended deployment pattern for operations that want both policy enforcement (nklave) and HSM-backed key management (Web3Signer). In this configuration, the validator client sends signing requests to nklave, which evaluates policies and forwards approved requests to Web3Signer, which performs the actual signing using keys stored in an HSM or vault. The chain is: Validator Client -> nklave -> Web3Signer -> HSM.

What happens if nklave goes down?

If the nklave process crashes or becomes unavailable, the validator client will be unable to obtain signatures. This means validators will miss attestations and proposals, incurring inactivity penalties. However, inactivity penalties on Ethereum are extremely small compared to slashing penalties — approximately 0.00001 ETH per missed attestation versus a minimum of 1 ETH for a slashing event. A brief nklave outage is a minor operational issue; a slashing event is a major financial loss. The design intentionally favors safety over liveness.

How does nklave handle client diversity rotations?

Without nklave, the standard procedure involves exporting slashing protection data in EIP-3076 format from the old client and importing it into the new one — an error-prone process that is a common source of slashing events.

With nklave, signing history is maintained independently of the validator client. When you switch clients, you point the new client at the same nklave instance. nklave’s state is continuous and unaffected by the swap, making client diversity rotations significantly safer.

Is nklave suitable for solo stakers?

nklave is designed primarily for institutional and professional staking operations that manage multiple validators and have compliance requirements. Solo stakers running a single validator can benefit from the additional protection, but the operational complexity may not be justified for simple, single-machine setups. That said, solo stakers who frequently migrate between clients or run validators on cloud infrastructure (where duplicate instances are a real risk) may find nklave valuable.

Does nklave support distributed validator technology (DVT)?

nklave is complementary to DVT solutions like SSV Network and Obol. In a DVT setup, nklave can be deployed on each operator’s node to provide policy enforcement before participating in the threshold signing ceremony. Each DVT operator node configures nklave as its local signer, and nklave forwards approved requests to the DVT middleware.


Conclusion

Validator slashing is an operational risk with a known solution. The protocol-level mechanics are well-defined, the failure modes are well-documented, and the cost of an incident is quantifiable and significant. What has been missing is a dedicated, independent, auditable policy-enforcement layer that catches dangerous signing requests before they reach the key material.

nklave provides that layer. It is a policy-enforcing trust boundary that sits between your validator client and your signing keys, applying slashing-prevention rules to every request, logging every decision, and maintaining rollback-resistant state that survives infrastructure failures.

For institutional staking operations, the value proposition is clear: MIT-licensed software that costs nothing to deploy and prevents the most expensive class of operational failure in Proof-of-Stake networks.

Explore Cryptuon’s full suite of blockchain infrastructure tools at cryptuon.com/solutions, or reach out to our team for help deploying nklave in your staking operation.


Dipankar Sarkar

Founder, Cryptuon

Blockchain researcher and systems engineer. Author of 5 published papers on cross-chain composability, MEV mitigation, and DePIN protocols. Building production blockchain infrastructure in Rust and Zig.
