# Alternate phase 2 architecture proposal
There is no longer any notion of shard state; instead, a shard block only contains a blob of data, $D$, a reference to an on-beacon-chain pure function ID, $i$, and a reduction output $r = functions[i](D)$. The beacon chain now contains a full stateful state transition engine à la eth1. $r$ is part of the block header, and can be capped at e.g. 128 bytes.
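To make the block format concrete, here is a minimal sketch in Python; the names (`ShardBlock`, `functions`, `compute_reduction`) and the registry representation are illustrative assumptions, not part of any spec.

```python
from dataclasses import dataclass
from typing import Callable, Dict

MAX_REDUCTION_OUTPUT_SIZE = 128  # cap on len(r), e.g. 128 bytes

# Hypothetical registry of on-beacon-chain pure functions: bytes -> bytes.
functions: Dict[int, Callable[[bytes], bytes]] = {}

@dataclass
class ShardBlock:
    data: bytes        # the blob D
    function_id: int   # i, an index into the on-beacon-chain function registry
    reduction: bytes   # r = functions[i](D), included in the block header

def compute_reduction(block: ShardBlock) -> bytes:
    """Recompute r; validators would check that it matches block.reduction."""
    r = functions[block.function_id](block.data)
    assert len(r) <= MAX_REDUCTION_OUTPUT_SIZE
    return r
```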
### Building a simple state transition engine
Now let us look at how to build a simple state transition engine on top of this. Imagine a contract on the beacon chain, $C$, with two state variables: $slot$ and $state\_root$. It also stores a constant, $MY\_SHARD$. The contract waits for one type of transaction, which provides a Merkle proof, going through a crosslink, to the $r$ value and $i$ value of a block header. $C$ verifies the following claims:
1. The Merkle proof is of a block from slot $C.slot+1$ of shard $MY\_SHARD$. Note that this can be done implicitly by calculating the generalized indices for the $i$ and $r$ values for that slot and using that generalized index to verify the Merkle proof.
2. The $i$ value equals a particular preset function ID $id_{verifier}$, the logic of which we'll describe below.
3. The $r$ value is of the form $(pre, post)$ where $pre = C.state\_root$.
If (1) fails, the execution does nothing. If (2) or (3) fail, the execution sets $C.slot += 1$. If all three checks pass, the execution sets $C.slot += 1$ and $C.state\_root = post$.
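Here is a minimal sketch of this contract logic in Python, assuming 32-byte state roots; `header_field_gindices` and `verify_merkle_proof` are hypothetical helpers standing in for generalized-index computation and SSZ multiproof verification against the crosslink, and `ID_VERIFIER` is an illustrative value for $id_{verifier}$.

```python
ID_VERIFIER = 7  # hypothetical preset function ID id_verifier

def header_field_gindices(shard: int, slot: int) -> tuple:
    """Generalized indices of the (i, r) fields of the given shard block header."""
    raise NotImplementedError  # depends on the crosslink data layout

def verify_merkle_proof(proof: dict, root: bytes, gindices: tuple) -> bool:
    """Verify an SSZ multiproof for the given generalized indices against `root`."""
    raise NotImplementedError

class StateRootManager:
    def __init__(self, my_shard: int, genesis_state_root: bytes):
        self.MY_SHARD = my_shard
        self.slot = 0
        self.state_root = genesis_state_root

    def process_header_proof(self, proof: dict, crosslink_root: bytes) -> None:
        # (1) The proof must cover the (i, r) fields of the block at slot
        #     self.slot + 1 on shard MY_SHARD; if it does not, do nothing.
        gindices = header_field_gindices(self.MY_SHARD, self.slot + 1)
        if not verify_merkle_proof(proof, crosslink_root, gindices):
            return
        i, r = proof["function_id"], proof["reduction"]
        # (2) + (3): the block's function ID must be id_verifier, and r must be
        #     of the form (pre, post) with pre equal to the current state root.
        if i == ID_VERIFIER:
            pre, post = r[:32], r[32:64]
            if pre == self.state_root:
                self.state_root = post
        # A valid proof always consumes the slot, even if (2) or (3) failed.
        self.slot += 1
```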
Now let's describe the function code that must be stored at $id_{verifier}$. The code takes as input a block $D$ and interprets it as coming in two parts. The first part is a Merkle multiproof verifying parts of a state (from this multiproof you can also calculate a state root, which we'll call $pre$). The second part is a set of transactions. The code then executes the transactions on the provided state. If any transaction attempts to access data that is not part of the provided multiproof, the code returns a failure (or just zero bytes). If the transaction execution succeeds, then let the resulting new state root be $post$; the code returns $(pre, post)$.
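A minimal sketch of this verifier function follows; the codec and partial-state helpers (`decode_block_body`, `state_from_multiproof`, `apply_transaction`) are assumptions about the execution environment, not defined by the proposal.

```python
class MissingStateError(Exception):
    """A transaction touched state not covered by the provided multiproof."""

def decode_block_body(data: bytes):
    """Split the blob D into (multiproof, transactions); encoding left unspecified."""
    raise NotImplementedError

def state_from_multiproof(multiproof):
    """Build a partial state object (with a .root() method) from an SSZ multiproof."""
    raise NotImplementedError

def apply_transaction(partial_state, tx) -> None:
    """Execute one transaction; raises MissingStateError on access outside the proof."""
    raise NotImplementedError

def id_verifier(data: bytes) -> bytes:
    """The function stored at id_verifier: returns the concatenation pre + post."""
    multiproof, transactions = decode_block_body(data)
    partial_state = state_from_multiproof(multiproof)
    pre = partial_state.root()  # root of the claimed pre-state
    for tx in transactions:
        try:
            apply_transaction(partial_state, tx)
        except MissingStateError:
            return b"\x00" * 64  # failure: just return zero bytes
    post = partial_state.root()  # root after executing all transactions
    return pre + post
```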
Logic for deposits and withdrawals needs to be added on, but is fairly simple. And this is all you need to build a state transition engine.
### Clever trick 1: using multiple shards
Suppose that you want to decrease your de-facto block time by 4x using [the technique here](https://ethresear.ch/t/near-instant-transaction-confirmation-via-staggered-shard-block-production/5643). To do this, you need only make a trivial modification to the contract: let $MY\_SHARDS = [s_0, s_1, s_2, s_3]$ be an array of four shard IDs. Modify the generalized index calculation for the Merkle proof, replacing (slot $C.slot + 1$, shard $MY\_SHARD$) with (slot $\lfloor \frac{C.slot + 1}{4} \rfloor$, shard $MY\_SHARDS[(C.slot + 1) \bmod 4]$).
Essentially, we are just treating the blocks of four shards, taken in order, as though they represent the execution of a single shard.
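A minimal sketch of the modified proof-target calculation, with illustrative shard IDs:

```python
MY_SHARDS = [10, 11, 12, 13]  # hypothetical shard IDs s_0..s_3

def proof_target(contract_slot: int) -> tuple:
    """Return (shard, slot) whose block header the next proof must point at."""
    n = contract_slot + 1          # the block the contract consumes next
    return MY_SHARDS[n % 4], n // 4

# Example: if C.slot = 6, the next proof must come from shard MY_SHARDS[3]
# at physical slot 1.
```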
### Clever trick 2: synchronous cross shard blocks
_See https://ethresear.ch/t/simple-synchronous-cross-shard-transaction-protocol/3097 for an earlier exposition of an equivalent technique._
Suppose that you have a system similar to the simple state transition engine above, except that it maintains a $slot$ and $state\_root$ for multiple shards (so $slot$ and $state\_root$ are both arrays). We will add a function for handling a special kind of block that we'll call a _multi-shard block_. We now interpret the output of a block as five values: $(other, pre_{me}, pre_{other}, post_{me}, post_{other})$. We process these blocks with a function that accepts two such blocks, which we'll call $A$ and $B$, from shards $a$ and $b$ respectively. The function performs the following checks (a sketch follows the list):
* Let $a = B.other$ and $b = A.other$ for clarity.
* Check Merkle proofs of both blocks, using slots $C.slot[a] + 1$ for A and $C.slot[b] + 1$ for B and shards $a$ for A and $b$ for B. Check that the $i$ values on both blocks equal $id_{verifier}$.
* Check $C.slot[a] = C.slot[b]$ (if not, exit and do nothing)
* Check the following:
* $C.state\_root[a] = A.pre_{me} = B.pre_{other}$ and $C.state\_root[b] = B.pre_{me} = A.pre_{other}$
* $A.post_{me} \ne 0$ and $B.post_{me} \ne 0$
* $A.post_{me} = B.post_{other}$ and $B.post_{me} = A.post_{other}$
* If any of the above fails set $C.slot[a] += 1$ and $C.slot[b] += 1$ and exit. If all of the above succeed set $C.state\_root[a] = A.post_{me}$ and $C.state\_root[b] = B.post_{me}$
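Here is a minimal sketch of this handler, assuming the Merkle proofs and $i = id_{verifier}$ checks for both blocks have already been verified by the caller (as in the single-shard contract); the record type, the 32-byte zero constant, and advancing the slot on the success path are illustrative assumptions.

```python
from dataclasses import dataclass

ZERO = b"\x00" * 32  # assuming 32-byte state roots

@dataclass
class MultiShardOutput:
    other: int        # the counterparty shard ID named by the block
    pre_me: bytes
    pre_other: bytes
    post_me: bytes
    post_other: bytes

class MultiShardManager:
    def __init__(self, shards, genesis_roots):
        self.slot = {s: 0 for s in shards}
        self.state_root = dict(zip(shards, genesis_roots))

    def process_pair(self, A: MultiShardOutput, B: MultiShardOutput) -> None:
        # Merkle proofs and the i == id_verifier checks for both blocks are
        # assumed to have been verified already; failures there simply exit.
        a, b = B.other, A.other  # each block names the *other* shard
        if self.slot[a] != self.slot[b]:
            return  # shards out of sync: exit and do nothing
        ok = (
            self.state_root[a] == A.pre_me == B.pre_other
            and self.state_root[b] == B.pre_me == A.pre_other
            and A.post_me != ZERO and B.post_me != ZERO
            and A.post_me == B.post_other and B.post_me == A.post_other
        )
        if ok:
            self.state_root[a] = A.post_me
            self.state_root[b] = B.post_me
        # Both shards consume the slot in either case (advancing on success is
        # assumed here, mirroring the single-shard contract).
        self.slot[a] += 1
        self.slot[b] += 1
```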
Now, the logic for the function code. It expects blocks to have four parts: (i) a shard ID $other$, (ii) a Merkle proof for the pre-state $pre_{me}$, (iii) a Merkle proof for the pre-state $pre_{other}$, and (iv) transactions. It executes the transactions using both states, and outputs the final state roots $post_{me}$ and $post_{other}$ (if transaction execution fails, it outputs $0$ for both).
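And a corresponding sketch of this two-shard verifier, reusing the same style of hypothetical codec and partial-state helpers as the single-shard `id_verifier` sketch above; `decode_two_shard_body` and `apply_transaction_two_shard` are likewise assumptions.

```python
ZERO = b"\x00" * 32  # assuming 32-byte state roots

class MissingStateError(Exception):
    """A transaction touched state not covered by either multiproof."""

def decode_two_shard_body(data: bytes):
    """Split D into (other, multiproof_me, multiproof_other, transactions)."""
    raise NotImplementedError

def state_from_multiproof(multiproof):
    """Build a partial state object (with a .root() method) from a multiproof."""
    raise NotImplementedError

def apply_transaction_two_shard(state_me, state_other, tx) -> None:
    """Execute one transaction that may read and write both partial states."""
    raise NotImplementedError

def id_verifier_two_shard(data: bytes) -> tuple:
    """Return the five-value output (other, pre_me, pre_other, post_me, post_other)."""
    other, proof_me, proof_other, transactions = decode_two_shard_body(data)
    state_me = state_from_multiproof(proof_me)
    state_other = state_from_multiproof(proof_other)
    pre_me, pre_other = state_me.root(), state_other.root()
    try:
        for tx in transactions:
            apply_transaction_two_shard(state_me, state_other, tx)
    except MissingStateError:
        # Failed execution: zero post-roots, so the beacon-chain check rejects.
        return (other, pre_me, pre_other, ZERO, ZERO)
    return (other, pre_me, pre_other, state_me.root(), state_other.root())
```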
Notice that you can now have synchronous execution between two shards, but there are tight conditions to ensure that it happens correctly: the transitions executed in both shards must be the same (though the scheme could be extended to allow the block to have another section of shard-local transactions, which could differ between $A$ and $B$). The synchrony is enforced by having the contract on the beacon chain check the two state roots against each other, after the fact, in the same synchronous environment (the beacon chain).
### Clever trick 3 (exercise to the reader)
* An asynchronous system where a block at slot $n+1$ on one shard has access to the post-state root at slot $n$ on other shards.
### Clever trick 4: doing update processing in shards
One problem with these schemes is that they require quite a bit of on-beacon-chain overhead: one Merkle branch per block per shard. We can reduce this overhead by creating yet another in-shard function which processes a batch of updates. Its output would consist of a set of storage positions to update in a contract of any of the above types, together with their pre-states and post-states.
This scheme could potentially be applied recursively, allowing for O(1) beacon chain computation to "commit" an arbitrarily large amount of in-shard computation, at a low constant overhead (because the sum of an exponentially decreasing series is constant).
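As an illustration of the beacon-chain side of this trick, here is a minimal sketch in which one proven block output carries a whole batch of (position, pre, post) storage updates for a manager contract, so one Merkle branch commits many per-shard updates at once; the record type and the flat storage map are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class StorageUpdate:
    position: int   # storage slot in the manager contract to update
    pre: bytes      # claimed current value at that position
    post: bytes     # new value to write

class BatchedManager:
    def __init__(self, initial_storage: Dict[int, bytes]):
        self.storage = dict(initial_storage)

    def apply_batch(self, updates: List[StorageUpdate]) -> None:
        # Reject the whole batch if any claimed pre-value is stale; the beacon
        # chain only checks these pre/post claims, since the per-shard work that
        # produced them was already done (and proven) in-shard.
        for u in updates:
            if self.storage.get(u.position) != u.pre:
                return
        for u in updates:
            self.storage[u.position] = u.post
```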
## Alternate alternate proposal
Stick with the current phase 2 proposal. Instead, achieve the benefits of all of the above techniques by applying clever trick 4 itself: the state root manager contract $C$ would itself be an execution environment.