# Dedicated minimal rollup for keystores
Account abstraction wallets will need to allow users to store funds across a large number of L2s. They will also need to support the ability of a user to update their keys, without requiring a transaction on each L2, especially L2s where they have not yet deployed their account contracts. The simplest way to achieve this is to store a **keystore contract** for each user on L1, and have users' account contracts on the various L2s read the user's keystore to determine the current public key that they need to check the signature against.
However, in the medium and long term, the L1 is likely too expensive, and so it would be better to store keystores on an L2. But this introduces another challenge: because the keystore is so central to a user's security across _all_ L2s, it's vital for it to be as simple as possible. It should not support a full EVM; instead, it should support a very minimal set of operations that allow a user to store and update their key. Additionally, it should be maximally friendly to ZK storage proofs, allowing keys in the keystore to be used for privacy-preserving note systems in addition to "regular" accounts.
### What kind of generality does this L2 need to have?
So far, we have seen AA wallets using the following types of cryptography:
* secp256k1 ECDSA signatures
* secp256r1 ECDSA signatures
* EdDSA signatures
* "[ZK email](https://archive.devcon.org/archive/watch/6/zkemail-decentralized-id-verification-on-chain-without-servers/)" and similar systems using ZK-SNARK proofs of other cryptography
In addition, one major goal of account abstraction is quantum safety, and so we should include at least one quantum-safe signing method; for simplicity, we can stick with Winternitz signatures.
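For illustration of how simple hash-based signing is, here is a minimal Winternitz one-time signature sketch using sha256 with w = 16. The parameters and encoding here are illustrative (a real scheme would add domain separation and per-chain tweaks), and each key pair must only ever sign one message:

```python
import hashlib, os

W = 16       # each chunk holds 4 bits, so each chain is 15 hashes long
N_MSG = 64   # a 256-bit digest splits into 64 4-bit chunks
N_CSUM = 3   # checksum <= 64 * 15 = 960 < 16**3, so 3 chunks suffice

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def digits(msg: bytes) -> list:
    # Split the digest into 4-bit chunks, then append a checksum that
    # prevents forging a signature by advancing chains further
    d = [n for b in H(msg) for n in (b >> 4, b & 0x0F)]
    csum = sum(W - 1 - x for x in d)
    return d + [(csum >> 8) & 0xF, (csum >> 4) & 0xF, csum & 0xF]

def keygen():
    sk = [os.urandom(32) for _ in range(N_MSG + N_CSUM)]
    pk = [chain(s, W - 1) for s in sk]
    return sk, pk

def sign(sk, msg: bytes):
    return [chain(sk[i], d) for i, d in enumerate(digits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(chain(sig[i], W - 1 - d) == pk[i]
               for i, d in enumerate(digits(msg)))
```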
ERC-4337 account abstraction uses a full EVM to define key management logic. In this L2, we can have something much simpler: (i) a key, and (ii) a mechanism for changing the key. That said, if the mechanism for changing the key is going to include ZK email, then we may as well just use ZK-SNARKs for everything.
### A quasi-specification
The minimal keystore rollup (MKSR) maintains a **state** which is a **key-value map**, where the **key** is a field element, and the **value** is a tuple of (i) data, intended to store verification and encryption keys, and (ii) a verification key for a ZK-SNARK to change that data.
The value at a particular key can be changed in two ways:
* If the value is zero, then the value at key `hash(data, vk)` can be set to `(data, vk)`
* Otherwise, for any key `A` with a nonzero value, the data `state[A].data` and the verification key `state[A].vk` can be reset by providing a SNARK that verifies against the verification key stored at `state[A].vk`
The first criterion enables counterfactual addresses: for every `(data, vk)` pair, there is an address which can only be initialized to that `(data, vk)` pair. This in turn means that for every `(data, vk)` pair it is always possible to construct an address on Ethereum or any CREATE2-capable L2 which looks at that key of the MKSR state, and so is by-default mapped to that `(data, vk)` pair.
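A minimal sketch of these two rules in Python, with a dict as the state and a hash-lock standing in for real SNARK verification (`toy_verify` and the transaction encoding are illustrative, not part of the spec):

```python
from hashlib import sha256

def h(*parts: bytes) -> bytes:
    return sha256(b"".join(parts)).digest()

def toy_verify(vk, proof, public_inputs) -> bool:
    # Stand-in for PLONK verification: here the "vk" is a hash
    # commitment and the "proof" is its preimage. A real proof would
    # also be bound to public_inputs.
    return proof is not None and sha256(proof).digest() == vk

def apply_transaction(state, tx, verify_proof=toy_verify):
    entry = state.get(tx["key"])
    if entry is None:
        # Rule 1 (counterfactual init): the key must commit to the value
        assert tx["new_vk"] is not None
        assert tx["key"] == h(tx["new_data"], tx["new_vk"])
        state[tx["key"]] = (tx["new_data"], tx["new_vk"])
    else:
        # Rule 2: a proof verifying against the stored vk authorizes
        # resetting both the data and (optionally) the vk
        _, vk = entry
        assert verify_proof(vk, tx["proof"], (tx["new_data"], tx["new_vk"]))
        state[tx["key"]] = (tx["new_data"], tx["new_vk"] or vk)
```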
To verify blocks, **we do not use recursive SNARKs**. Instead, we simply use batch verification: PLONK allows a batch verifier to verify N proofs (with N different verification keys, from N users) with O(N) elliptic curve multiplications but only a _single pairing_. AZTEC's numbers suggest that this requires only ~5000 gas per transaction. To the N SNARKs provided by users, we also add a SNARK proving that the state changes in the tree (which could be eg. a Poseidon sparse Merkle tree, or something based on [this](https://ethresear.ch/t/a-nearly-trivial-on-zero-inputs-32-bytes-long-collision-resistant-hash-function/5511); simplicity is more important than optimality) are correct.
That is, a block data structure is as follows:
```python3
class Block(Container):
    transactions: List[Transaction]
    pre_state_root: Hash32
    post_state_root: Hash32
    state_change_proof: PlonkProof

class Transaction(Container):
    key: FieldElement
    new_data: Bytes  # maximum 256 bytes
    new_vk: Optional[PlonkVerificationKey]
    proof: Optional[PlonkProof]
```
The `state_change_proof` proves that `post_state_root` is the state root that results from taking a tree whose root is `pre_state_root` and, for each transaction, setting `state[tx.key].data = tx.new_data`, and if `tx.new_vk` is nonempty, setting `state[tx.key].vk = tx.new_vk`.
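For intuition, here is a toy sparse Merkle tree with branch verification, using sha256 at depth 16 as a stand-in for a SNARK-friendly hash such as Poseidon at full depth (this is plain Python, not the actual circuit):

```python
import hashlib

DEPTH = 16  # toy depth; a real tree would be much deeper (e.g. 256)

def H(l: bytes, r: bytes) -> bytes:
    # sha256 stands in for a SNARK-friendly hash such as Poseidon
    return hashlib.sha256(l + r).digest()

# Default hashes of all-empty subtrees, indexed by height
EMPTY = [b"\x00" * 32]
for _ in range(DEPTH):
    EMPTY.append(H(EMPTY[-1], EMPTY[-1]))

class SparseMerkleTree:
    def __init__(self):
        self.leaves = {}  # leaf index -> 32-byte leaf hash

    def set(self, key: int, leaf_hash: bytes):
        self.leaves[key] = leaf_hash

    def _node(self, prefix: int, height: int) -> bytes:
        lo, hi = prefix << height, (prefix + 1) << height
        if not any(lo <= k < hi for k in self.leaves):
            return EMPTY[height]  # untouched subtree: use default hash
        if height == 0:
            return self.leaves[prefix]
        return H(self._node(2 * prefix, height - 1),
                 self._node(2 * prefix + 1, height - 1))

    def root(self) -> bytes:
        return self._node(0, DEPTH)

    def prove(self, key: int) -> list:
        # Sibling hashes from the leaf up to the root
        return [self._node((key >> h) ^ 1, h) for h in range(DEPTH)]

def verify_branch(root, key, leaf_hash, path) -> bool:
    # Recomputing the root from a branch is essentially the Merkle
    # branch lookup that the rollup's circuit needs to encode
    node = leaf_hash
    for h, sibling in enumerate(path):
        node = H(sibling, node) if (key >> h) & 1 else H(node, sibling)
    return node == root
```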
The `proof` in each transaction is a proof that verifies against `state[tx.key].vk`, with `(new_data, new_vk)` mixed in to generate Fiat-Shamir randomness (to "bind" the proof to that data and make it work as a signature). If `state[tx.key] = 0`, then it is allowed for the `tx.proof` to be empty, and `tx.key = hash(tx.new_data, tx.new_vk)` is checked instead.
### How this would be used
A user's wallet would generate (i) a `data` string, which would generally be a public key, or in the case of a multisig wallet a concatenation of multiple public keys, and (ii) a verification key (`vk`) that specifies the instructions for changing the `data` and the `vk`. The wallet would compute `key = hash(data, vk)`, and then for each chain generate the `CREATE2` address of a wallet which demands a claim of the `data` at the given `key` (verified with a ZK-SNARK through [ERC-4337](https://eips.ethereum.org/EIPS/eip-4337) aggregation) and then checks a signature using that `data` as a public key.
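The wallet-side derivation might look like the following sketch. Note that `hashlib.sha3_256` is a stand-in here: Ethereum's keccak256 uses pre-NIST padding, so real code would use a library such as eth-hash; the helper names are also illustrative:

```python
import hashlib

def keccak(data: bytes) -> bytes:
    # Stand-in only: NIST SHA3-256 differs from Ethereum's keccak256
    # in its padding rule
    return hashlib.sha3_256(data).digest()

def mksr_key(data: bytes, vk: bytes) -> bytes:
    # The keystore slot that (data, vk) can counterfactually claim
    return keccak(data + vk)

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    # EVM CREATE2: keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
    assert len(deployer) == 20 and len(salt) == 32
    return keccak(b"\xff" + deployer + salt + keccak(init_code))[12:]

def wallet_address(factory: bytes, wallet_init_code: bytes,
                   data: bytes, vk: bytes) -> bytes:
    # Bake the keystore key into the salt, so the wallet address
    # commits to the (data, vk) pair before anything is deployed
    return create2_address(factory, mksr_key(data, vk), wallet_init_code)
```

Because the address is a pure function of `(data, vk)`, the user can receive funds at it on any CREATE2-capable chain before deploying anything there.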
To perform a recovery, a user would generate a `Transaction` to change their `data` and/or `vk`. They could pay for inclusion out-of-band, or we could add a `fee` parameter to the `Transaction` (which would in turn require adding a `balance` to the state). If neither option works, a user can always perform a recovery on their own by generating and publishing the proof on L1. The hope is that we can save gas costs by batching proofs across _both_ recovery operations _and_ accesses for user-signed transactions on L1.
### Extension: keystore privacy
Suppose you have many accounts, either on one L2 or across many L2s, and you want them all to use the same keystore, but not be publicly linkable. It is not very difficult to extend the above scheme to do this. Instead of providing the `key` and the `MKSR.state[key].data` (ie. the public key) in the clear, we can do one of the following:
* The signer generates a ZK-SNARK that directly proves that they generated a valid signature with their public key. If we use Schnorr signatures, this essentially means SNARK-proving over a couple of elliptic curve additions and multiplications and an extra hash. The public key and the signature are both never published in the clear.
* We treat the `MKSR.state[key].data` itself as a verification key, and require the signer to provide a ZK-SNARK that matches that verification key.
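To make the first option concrete, the relation the signer would prove inside the SNARK is just an ordinary Schnorr verification. A toy version over a multiplicative group mod a small prime (parameters are far too small for real use, where an elliptic curve group would be used; this only shows the handful of group operations and the hash involved):

```python
import hashlib, secrets

# Toy Schnorr group: P is a safe prime, G generates the order-Q subgroup
P = 2039          # P = 2*Q + 1
Q = 1019          # prime subgroup order
G = 4             # generator of the order-Q subgroup

def h(*parts) -> int:
    data = b"".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # secret key
    return x, pow(G, x, P)             # (sk, pk = g^x)

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1   # per-signature nonce
    R = pow(G, k, P)
    e = h(R, msg)
    return R, (k + x * e) % Q

def verify(y, msg, sig) -> bool:
    R, s = sig
    e = h(R, msg)
    # Two exponentiations plus a hash: this check is what the
    # privacy-preserving variant would prove inside a ZK-SNARK,
    # so that neither the public key nor the signature is revealed
    return pow(G, s, P) == (R * pow(y, e, P)) % P
```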
For both cases, we could extend the ERC-4337 aggregator to support these kinds of proofs.
This essentially gives us [stealth addresses](https://vitalik.ca/general/2023/01/20/stealth.html) "for free", in addition to solving the privacy problem for [social recovery guardians](https://vitalik.ca/general/2021/01/11/recovery.html).
### Rationale
Other possible design choices for this construction include:
* Just making it a ZK-EVM rollup: this would introduce much more complexity. The goal of this construction is to be extremely strict on minimizing the number of lines of code in the rollup. Notice that **in the design above, the only ZK-SNARK circuit code that is baked into the rollup itself is Merkle branch lookups**.
* Doing one layer of recursive SNARKing to verify proofs inside the circuit. This may be worth doing eventually, as it would save (i) marginal costs of proof verification, namely elliptic curve multiplications on L1, and (ii) data costs of needing the proof to go on-chain, but this would greatly add complexity, as that would require _much_ more ZK-SNARK circuit code. Furthermore, recoveries are rare, so there is not much need to go that far.
* Using a KZG map instead of a sparse Merkle tree to store state. This could remove the need for "circuit code" completely, and make loading the `data` easier in a regular transaction (as it would require only one pairing and one 48-byte proof), and it is worth exploring! The main weakness is that a KZG map has a bounded size, and this introduces risks for counterfactual users if a map becomes full while they are offline.