by @mmjahanara, @pierre
Thanks to @b-wagn, @soispoke and @levs57 for the helpful discussions and comments which led to this writeup.
TLDR; We go over why wormholes’ plausible deniability property doesn’t seem compatible with Ethereum’s 160-bit addresses. We also shed light on EIP-7503’s claimed anonymity set, which is much lower than expected in practice. While the latter issue should be solvable, we believe the former might require rethinking the overall wormhole setup. We conclude the post with a brief description of a potential follow-up design using beacon chain deposits.
On Wormholes
Ethereum privacy solutions have historically relied on application-specific anonymity sets. In protocols like Tornado Cash, the act of depositing funds is an explicit interaction with a specific smart contract. This creates a fundamental flaw: while the link between depositor and withdrawer is broken, the participation in the privacy protocol is public. These designs allow chain observers to label and potentially censor all depositors to the anonymity set.
EIP-7503 (zero-knowledge wormholes) proposes a different paradigm, relying on plausible deniability.
The mechanism is simple: users “burn” funds by sending them to a cryptographically determined unspendable address. Later, they provide a (non-interactive) zero-knowledge proof (NIZK) that they know the pre-image of that address to re-mint the funds in a fresh account. Crucially, the “deposit” (or “burn”) is indistinguishable from a standard ETH transfer to a fresh address.
However, we believe there might be a fundamental tradeoff between plausible deniability and security, which may hinder the enshrinement of wormholes within the L1 today. In addition, we also shed light on an issue with EIP-7503’s effective anonymity set, which might be lower than expected.
Notation and Assumptions
We will use:
- $\mathsf{H}(\cdot)$ for a generic 256-bit hash function (e.g., SHA3, SHA-256, …).
- $\operatorname{trunc}_{160}(x)$ for truncation of a 256-bit value $x$ to a 160-bit Ethereum address.
- Domain-separated hashes as $\mathsf{H}(\text{“TAG”} \parallel \cdots)$, where $\text{“TAG”}$ is a fixed ASCII prefix (e.g. $\text{“worm”}$, $\text{“null”}$).
EIP-7503
In the original EIP-7503 specification, a burn address is derived from a single secret $s$:

$$Addr_{burn}(s) = \operatorname{trunc}_{160}\big(\mathsf{H}(\text{“worm”} \parallel s)\big)$$
In the first step, the user burns their funds by picking a random $s$ and sending them to $Addr_{burn}(s)$. Later, the user can mint funds by providing to a smart contract a nullifier $\nu = \mathsf{H}(\text{“null”} \parallel s)$ and a NIZK proof that $\nu$ is consistent with some existing transaction $A \to B$ (i.e., uses $s$ s.t. $B = Addr_{burn}(s)$). The contract verifies the proof and checks that $\nu$ has not been submitted before. If these checks pass, the funds are minted.
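As a toy illustration of this flow, here is a minimal Python sketch. SHA-256 stands in for $\mathsf{H}$, a boolean stands in for actual NIZK verification, and the names `addr_burn`, `nullifier`, `mint` and the `spent` set are our own illustrative choices, not part of the EIP:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    """Generic 256-bit hash (SHA-256 here; the scheme is hash-agnostic)."""
    return hashlib.sha256(data).digest()

def addr_burn(s: bytes) -> bytes:
    """Burn address: 160-bit truncation of a domain-separated hash of the secret."""
    return H(b"worm" + s)[:20]  # trunc_160

def nullifier(s: bytes) -> bytes:
    """Nullifier revealed at mint time; one per secret s."""
    return H(b"null" + s)

# Mint-side bookkeeping: the contract only tracks spent nullifiers.
spent: set = set()

def mint(nu: bytes, proof_ok: bool) -> bool:
    """Accept a mint iff the (stand-in) NIZK verifies and the nullifier is fresh."""
    if not proof_ok or nu in spent:
        return False
    spent.add(nu)
    return True

s = secrets.token_bytes(32)
burn_to = addr_burn(s)                         # user sends ETH here; looks like a fresh EOA
assert mint(nullifier(s), proof_ok=True)       # first mint succeeds
assert not mint(nullifier(s), proof_ok=True)   # nullifier replay is rejected
```

Note that the contract never learns $s$ or `burn_to`; it only sees the nullifier and the proof, which is what breaks the link between burn and mint.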
The security properties of such a burn-mint mechanism aren’t explicit, so let us informally define them.
Correctness / Completeness
If a user behaves honestly, then they can mint after burning.
Privacy
- Unlinkability
Intuitively, privacy here means that no one (except the user themselves) can link the burn transaction to the minting process, i.e., to the mint transaction submitted to the mint contract. To be a bit more precise, we consider the user to be honest in this case, and want that no observer of the chain can tell which transaction was the burn transaction corresponding to the mint transaction.
- Plausible deniability
The second privacy property that we want is that the burn transaction should look like a “regular” transaction. In particular, the burn address $B$ should look like a random address. Plausible deniability is one of the major differences with Tornado Cash-like protocols requiring explicit contract interactions. With wormholes, participation in the burn protocol remains private; note that withdrawing remains public.
Non-Inflation
This is the idea that we don’t want ETH created out of thin air: one should only be able to mint if one burned before, i.e., our map from any successful mint transaction to a burn transaction should be injective. At first glance, this means preventing an adversary from producing two different nullifiers for the same burn transaction. Indeed, if an adversary could produce two such nullifiers, we would end up with an inflation bug.
Since $\mathsf{H}(\cdot)$ is collision resistant, wouldn’t that be straightforward? Well, as noticed early on by the Scroll team, this is where things get tough.
The birthday paradox
The birthday paradox implies that finding any two secrets $s_1, s_2$ such that the 160-bit addresses collide,

$$\operatorname{trunc}_{160}\big(\mathsf{H}(\text{“worm”} \parallel s_1)\big) = \operatorname{trunc}_{160}\big(\mathsf{H}(\text{“worm”} \parallel s_2)\big),$$

takes approximately $2^{80}$ operations. Although large, such a number remains within the reach of nation-states or massive mining pools. When an attacker finds such a collision, the protocol breaks:
- Deposit 100 ETH to the colliding address once.
- Mint with nullifiers $\nu_1=\mathsf{H}(\text{“null”} \parallel s_1)$ and $\nu_2=\mathsf{H}(\text{“null”} \parallel s_2)$. These are two distinct nullifiers, so 100 ETH are minted twice.
- Withdraw 200 ETH.
Since the verifier only checks that each nullifier is used once, it cannot tell that these two withdrawals are backed by the same on-chain deposit. This is an infinite inflation bug: 160-bit address collisions at the $2^{80}$ birthday bound translate directly into double-mints.
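The attack is easy to reproduce at toy scale. The sketch below runs the same birthday search against 24-bit addresses instead of 160-bit ones (SHA-256 standing in for $\mathsf{H}$), so a collision appears after a few thousand trials rather than $2^{80}$:

```python
import hashlib

TRUNC = 3  # 24-bit toy addresses; Ethereum truncates to 20 bytes (160 bits)

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def addr_burn(s: bytes) -> bytes:
    return H(b"worm" + s)[:TRUNC]

def nullifier(s: bytes) -> bytes:
    return H(b"null" + s)

# Birthday search: ~2^(24/2) = 4096 expected trials for a 24-bit collision.
seen = {}
s1 = s2 = None
i = 0
while s1 is None:
    s = i.to_bytes(8, "big")
    a = addr_burn(s)
    if a in seen:
        s1, s2 = seen[a], s
    else:
        seen[a] = s
    i += 1

assert addr_burn(s1) == addr_burn(s2)   # one on-chain burn address...
assert nullifier(s1) != nullifier(s2)   # ...but two distinct nullifiers: double mint
```

At 160 bits, the only change is the cost of the loop; the structure of the attack is identical.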
However, note that a chain with a larger address space would not face this security shortcoming: the security of the scheme would be satisfying with full-size 320-bit addresses. Ethereum’s 160-bit truncation is an optimization that it could, in theory, move away from.
Tentative solution: remove the collision search and make the scheme rely on dlog hardness instead?
Let us propose an idea: could we work around the birthday paradox and, instead of requiring the attacker to find collisions, ask them to break dlog instead?
Say that we work with the following NIZK on the statement $(\mathcal{R}_{root}, \nu)$ and witness $(sk, s_{worm}, tx = (A,B,pk), \text{salt}, \pi_{merkle})$ along the following constraints:
(1) $\pi_{merkle}$ authenticates $tx$ in $\mathcal{R}_{root}$;
(2) $pk == \mathsf{SkToPk}(sk)$, i.e., $pk = sk \cdot G$ for ECDSA keys;
(3) $B == \operatorname{trunc}_{160}\!\big(\mathsf{H}(\text{“worm”} \parallel sk \parallel \text{salt})\big)$;
(4) $\nu == \mathsf{H}(\text{“null”} \parallel sk \parallel tx)$.
That setting would be pretty nice: a user could even re-use their burn address by using the same $\text{salt}$ value for two different burn transactions!
The security argument also seems straightforward: double-minting now requires breaking the NIZK, ECDSA, or the hash used for $\nu$, rather than exploiting a $2^{80}$-cost address collision. Indeed, for a fixed burn transaction (and thus fixed $A, pk, B$), there is only one valid $sk$; otherwise, one would be able to find two different secret keys mapping to the same pubkey that initiated the transaction. Thus, all colliding witnesses map to the same nullifier.
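To make the relation concrete, here is a plaintext (i.e., un-proven) Python version of the constraints. Both `sk_to_pk` and `merkle_verify` are toy stand-ins, not real implementations: the former plays the role of ECDSA key derivation ($pk = sk \cdot G$) and the latter the role of Merkle authentication against $\mathcal{R}_{root}$:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def sk_to_pk(sk: bytes) -> bytes:
    # Toy one-way map standing in for ECDSA keygen (pk = sk * G).
    return H(b"toy-keygen" + sk)

def merkle_verify(root: bytes, tx: tuple, path) -> bool:
    # Placeholder: a real circuit verifies an authentication path against R_root.
    return root == H(repr(tx).encode())

def relation_holds(root: bytes, nu: bytes, sk: bytes, tx: tuple, salt: bytes, path) -> bool:
    """Plaintext version of the constraints the NIZK would enforce in-circuit."""
    A, B, pk = tx
    if not merkle_verify(root, tx, path):     # tx is authenticated in R_root
        return False
    if pk != sk_to_pk(sk):                    # pk matches the secret key
        return False
    if B != H(b"worm" + sk + salt)[:20]:      # burn address derived from sk and salt
        return False
    return nu == H(b"null" + sk + repr(tx).encode())  # nullifier binds sk and tx

# A valid witness satisfies the relation; tampering with the nullifier breaks it.
sk, salt = b"\x11" * 32, b"\x22" * 16
A, pk = b"\xaa" * 20, sk_to_pk(sk)
B = H(b"worm" + sk + salt)[:20]
tx = (A, B, pk)
root = H(repr(tx).encode())
nu = H(b"null" + sk + repr(tx).encode())
assert relation_holds(root, nu, sk, tx, salt, path=None)
assert not relation_holds(root, H(b"other"), sk, tx, salt, path=None)
```

The key point is that $sk$ appears in both the burn address derivation and the nullifier, so a fixed burn transaction pins down a single nullifier.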
Is it the fix?
Not really, because preventing nullifier collisions isn’t the only issue!
Wormholes should also prevent burn addresses from being spendable. As noted in the original EIP-7503, an attacker could find a pair of secrets $(s_1, s_2)$ such that create2_address(..., s1) == sha256("wormhole" || s2). In that setup, an attacker could burn funds to sha256("wormhole" || s2) and subsequently withdraw them from create2_address(..., s1). Although this only lets the attacker extract their ether twice, it should still be considered a failure.
But are we really concerned, given our updated circuit above? Since our design requires an attacker to break dlog, can’t we simply require EXTCODESIZE to not be 0 when burning funds? That would prevent an attacker from withdrawing funds burned to create2_address(..., s1).
The problem is that it is a mistake to think that the above scheme requires an adversary to break dlog in the first place. The game for an attacker to control the burn address remains, in fact, pretty much the same as the one involving the CREATE2 opcode, regardless of the proof of knowledge of the private key.
In our setup, for a burn address to be spendable, the game is the following: the attacker wins if they can find two secrets $(s_1, s_2)$ such that $\operatorname{trunc}_{160}(\mathsf{H}(\text{“worm”} \parallel s_1)) == \operatorname{trunc}_{160}(\mathsf{H}(\mathsf{SkToPk}(s_2)))$. Indeed, when that’s the case, the attacker ends up with a spendable burn address: finding such a pair lets the attacker initiate a transaction from $\operatorname{trunc}_{160}(\mathsf{H}(\mathsf{SkToPk}(s_1)))$ to $\operatorname{trunc}_{160}(\mathsf{H}(\text{“worm”} \parallel s_1))$, an address controlled with $s_2$!
We are back to square one: this is the same collision search problem with only 80 bits in complexity.
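The same toy-scale experiment makes this concrete: a cross-domain birthday search between “worm” hashes and (stand-in) public-key hashes finds a spendable burn address just as cheaply as the earlier collision did. Again we use 24-bit addresses and a toy one-way map in place of the real SkToPk:

```python
import hashlib

TRUNC = 3  # 24-bit toy addresses; Ethereum truncates to 20 bytes (160 bits)

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def burn_addr(s: bytes) -> bytes:
    return H(b"worm" + s)[:TRUNC]

def spend_addr(sk: bytes) -> bytes:
    # Stand-in for trunc_160(H(SkToPk(sk))); any one-way map works for the demo.
    return H(b"toy-keygen" + sk)[:TRUNC]

# Phase 1: tabulate candidate burn addresses.
table = {}
for i in range(1 << 13):
    s = i.to_bytes(8, "big")
    table[burn_addr(s)] = s

# Phase 2: search for a secret key whose spend address lands in the table.
i = 0
while True:
    sk = i.to_bytes(8, "big")
    if spend_addr(sk) in table:
        s1, s2 = table[spend_addr(sk)], sk
        break
    i += 1

# The attacker "burned" to burn_addr(s1), yet controls that address via s2.
assert burn_addr(s1) == spend_addr(s2)
```

No discrete log is ever broken: the attack works purely on the 160-bit (here 24-bit) truncation, exactly as in the CREATE2 case.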
Today’s plausibly deniable wormholes must rely on collision resistance
Ignoring detection heuristics, wormholes’ plausible deniability requirement entails that no one observing the L1 is able to distinguish wormhole transactions from regular ones. This property naturally requires the burn address computation to remain private to the user initiating the burn. But, because of the collision resistance game, there is no way for users to prove with more than 80 bits of security that the secret they used to generate the private 160-bit burn address does not also control it.
A way out could be to enshrine the burn address within the protocol itself (say, sending funds to $\operatorname{trunc}_{160}(\mathsf{H}(\text{“worm”}))$), turning the problem into a pre-image search task. But that setting would make us lose the plausible deniability property we are looking for. We end up stuck choosing between a weakly secure but plausibly deniable design and another one pretty much equivalent to Tornado Cash-like setups.
We list here three questions that we think are valuable for improving the state of wormholes:
- Are there any other primitives we could leverage to enshrine plausibly deniable wormholes in the L1? We think the answer is yes and are initiating follow-up work on leveraging beacon deposits instead. We briefly detail the idea in the add-on below.
- Would there be a way to build wormholes from a pre-image search problem with plausible deniability? We aren’t sure: it seems quite hard to work with a deterministic yet private burn address.
- Are there any low-hanging L1 changes that would let us answer one of the above two questions positively?
Add-ons
EIP-7503 effective anonymity set
There is a practical issue which we aren’t sure has been discussed properly. To be safe, we spell it out here.
EIP-7503 exposes the beacon_block_root as a public input of the mint proof. But this decision shrinks the claimed anonymity set (all of the Ethereum accounts with zero outgoing transactions) to the transactions which occurred in that block, since the corresponding burn transaction must be in the referenced block. If wormhole protocols decide to use transaction receipt roots as input to users’ mint proofs, keeping a large enough anonymity set will require access to an accumulator providing authentication paths to all previous transactions.
While this should work in theory, since every block root hash is a commitment to all transactions prior to that block, this proof would either require an efficient client-side recursive snark or a particular L1 setup - say replacing keccak with an arithmetization-friendly hash, or extending block headers with a field committing to a merklized block root containing all previous block hashes.
Follow-up work: Wormhole meets beacon deposits
The beacon deposit contract is a one-way bridge from the execution layer to the beacon chain. As deployed today, each validator is publicly linkable to its deposit and withdrawal destination: the contract exposes a single deposit function that accepts a deposit together with the validator pubkey and withdrawal credentials (withdrawal_credentials).
There might be a way to design a private version of the contract, which would replace pubkey and withdrawal_credentials with a corresponding hiding commitment. Deposits would then be claimed on the beacon chain by providing a zk proof-of-knowledge. In this construction the validator is added to the activation queue only after the deposit has been claimed on the beacon chain. Note that unclaimed deposits can still be added to the withdrawal queue like any regular validator.
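As a very rough sketch of what the commitment side could look like: all names below are hypothetical, SHA-256 stands in for whatever commitment scheme would actually be used, and a real design would verify a zk proof of knowledge of the opening rather than checking it in the clear as done here:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def deposit_commitment(pubkey: bytes, withdrawal_credentials: bytes, blinding: bytes) -> bytes:
    """Hiding commitment that would replace the public (pubkey, withdrawal_credentials) pair."""
    return H(b"deposit" + pubkey + withdrawal_credentials + blinding)

def claim_ok(commitment: bytes, pubkey: bytes, withdrawal_credentials: bytes, blinding: bytes) -> bool:
    """Beacon-chain-side claim check; a real design would verify a zk proof instead."""
    return commitment == deposit_commitment(pubkey, withdrawal_credentials, blinding)

pubkey = b"\x01" * 48   # BLS pubkeys are 48 bytes
creds = b"\x02" * 32
r = secrets.token_bytes(32)

c = deposit_commitment(pubkey, creds, r)   # what the deposit contract would store
assert claim_ok(c, pubkey, creds, r)       # valid opening: validator enters the queue
assert not claim_ok(c, pubkey, creds, b"\x00" * 32)  # wrong blinding is rejected
```

Replacing the in-the-clear check with a zk proof-of-knowledge is what would break the public link between a deposit and the validator it funds.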
This would require some protocol re-architecture, but it could materially improve validator privacy while also acting as an in-protocol, wormhole-like mechanism with plausible deniability. Moreover, with roughly one-third of ETH staked and on the order of tens of millions of dollars in daily withdrawals (or more), the resulting anonymity set could be meaningfully large.
