# Heuristically Partitioned Attestation Aggregation
An attestation consists of:
- a bitfield of N bits, one per validator of the attesting committee
- a BLS signature: the aggregate of the validator signatures corresponding to the set bitfield bits.
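As a minimal sketch, the structure above could be modeled like this; the type names and the `bit_count` helper are illustrative, not a real client API (clients use SSZ types and a BLS library):

```python
from dataclasses import dataclass

@dataclass
class Attestation:
    bits: int         # bitfield of N bits; bit i is set if validator i's signature is included
    signature: bytes  # aggregate BLS signature over the included validators (opaque here)

def bit_count(bits: int) -> int:
    """Number of validators included in this aggregate."""
    return bin(bits).count("1")

att = Attestation(bits=0b1010, signature=b"...")
print(bit_count(att.bits))  # -> 2
```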
## Goal
Sign an attestation as a validator, put it on the network, and have it be aggregated with all the other signatures.
## Problem
BLS signatures are linear, but participation is only tracked with a bitfield: a validator's signature may be aggregated into an attestation only once, i.e. only while the corresponding bit is still 0.
Gossiped attestations may have overlapping aggregate bitfields; such attestations cannot be aggregated with each other, since the shared signatures would be counted twice.
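The disjointness constraint can be made concrete with a small sketch; `merge_bits` stands in for the bitfield side of aggregation (the BLS signatures themselves would be aggregated alongside):

```python
def can_merge(bits_a: int, bits_b: int) -> bool:
    """Disjoint bitfields: no validator appears in both aggregates."""
    return bits_a & bits_b == 0

def merge_bits(bits_a: int, bits_b: int) -> int:
    """Bitfield of the merged aggregate; only valid for disjoint inputs."""
    assert can_merge(bits_a, bits_b)
    return bits_a | bits_b

print(can_merge(0b1100, 0b0011))        # -> True: disjoint, safe to aggregate
print(can_merge(0b1100, 0b0110))        # -> False: bit 2 overlaps, would double-count
print(bin(merge_bits(0b1100, 0b0011)))  # -> 0b1111
```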
## Idea
Aggregate attestations fit together like dominoes: some aggregate constructions align better with other aggregate forms than others.
As a heuristic, if certain classes of "shapes" are prioritized over others, aggregates with a bigger chance to overlap can be avoided altogether.
And with a sufficient number of validators sharing common heuristics, a herd immunity against badly shaped aggregates may arise.
The basic, highly desired form of aggregation (which Handel works towards with strong but privacy-breaking routing/grouping) is to think of the bitfield as a binary tree:
```
.^
.^ .^ <- etc.
.^ .^ .^ .^ <- pairs of pairs
.^.^.^.^.^.^.^.^ <- pairs
???????????????? <- bits
```
Instead, a bitfield with a friendly shape can be prioritized over one with an unfriendly one:
```
Good, fit towards a subtree:
.^
.^ .^
.^ .^ .^ .^
.^.^.^.^.^.^.^.^
1010111100000000
Good, fit towards a subtree:
.^
.^ .^
.^ .^ .^ .^
.^.^.^.^.^.^.^.^
0000000011101011
Bad, not fitted towards a subtree:
.^
.^ .^
.^ .^ .^ .^
.^.^.^.^.^.^.^.^
0100100010101001
```
Aggregates that are fitted to subtrees can be aggregated with aggregates of similar size but complementing subtrees.
Of course, if we only consider one subtree for this preference, we effectively get binary merging like Handel, but with inefficient routing.
To improve on that, the cost function that tells what is good and bad should also reward multiple well-filled subtrees, e.g.:
```
Reasonable, fit towards two equal size subtrees:
.^
.^ .^
.^ .^ .^ .^
.^.^.^.^.^.^.^.^
0000110100001011
```
This still has a good chance of being aggregated to a fuller tree by combination of smaller reasonably filled subtrees, or aggregates of these subtrees.
### Cost function
To implement this, a cost-function should be formalized that:
- Prefers bitfields with bits aligned to subtrees:
    - Smart partitioning
- Prefers subtrees to be fuller:
    - Optimal partition use
- Prefers bitfields with not too many of these partitions:
    - Improves chances of large-partition merges
- Dislikes bitfields with stray bits outside of partitions:
    - Partition merges of similar size are better
- Prefers bitfields whose equal-size subtrees are all left-hand or all right-hand:
    - Increases the chances of complementing bitfields, even if they contain multiple subtrees.
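One possible formalization, as a sketch: decompose the bitfield into its maximal fully-set, aligned subtrees, then reward big subtrees and penalize stray bits and fragmentation. The recursion is faithful to the tree picture above, but the weights are illustrative, not a tuned proposal:

```python
def partitions(bits: int, offset: int, width: int) -> list[int]:
    """Widths of the maximal fully-set, aligned subtrees covering the set bits."""
    mask = ((1 << width) - 1) << offset
    chunk = bits & mask
    if chunk == 0:
        return []
    if chunk == mask:
        return [width]  # this whole aligned subtree is full
    half = width // 2
    return partitions(bits, offset, half) + partitions(bits, offset + half, half)

def score(bits: int, n: int) -> int:
    """Higher is better; the weights are placeholders."""
    widths = partitions(bits, 0, n)
    full = sum(w * w for w in widths if w > 1)   # prefer fuller, bigger subtrees
    stray = sum(1 for w in widths if w == 1)     # dislike stray single bits
    return full - 2 * stray - len(widths)        # and too many partitions

def from_str(s: str) -> int:
    """Bitfield from the diagrams above: s[i] is the bit for validator i."""
    return sum(1 << i for i, c in enumerate(s) if c == "1")

good = from_str("1010111100000000")  # "good" shape from the diagrams
bad = from_str("0100100010101001")   # "bad" shape from the diagrams
print(score(good, 16), score(bad, 16))  # the good shape scores higher
```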
### Gossiping
Based on the cost-function, aggregates can be dropped, or expedited to propagate on gossipsub.
Dropping is important: otherwise every node just forwards everything to everyone, and we might as well aggregate from individual signatures locally.
### Aggregation itself
To get these heuristic aggregates, a node should buffer attestations and decide on how to aggregate them, if at all, based on the cost-function.
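A sketch of such a buffer-and-merge step. The `score` here is a deliberately tiny stand-in (it only counts completed aligned 2-bit pairs); a real implementation would plug in the full cost function from the previous section:

```python
def score(bits: int) -> int:
    """Stand-in cost function: count fully-set aligned 2-bit pairs."""
    s, i = 0, 0
    while bits >> i:
        if (bits >> i) & 0b11 == 0b11:
            s += 1
        i += 2
    return s

def try_aggregate(buffer: list[int], incoming: int) -> list[int]:
    """Merge `incoming` into the disjoint buffered aggregate that gains the
    most score, or buffer it separately if no merge improves the shape."""
    best, best_gain = None, 0
    for j, bits in enumerate(buffer):
        if bits & incoming == 0:  # disjoint: merge is valid
            gain = score(bits | incoming) - max(score(bits), score(incoming))
            if best is None or gain > best_gain:
                best, best_gain = j, gain
    if best is not None and best_gain > 0:
        buffer[best] |= incoming
    else:
        buffer.append(incoming)
    return buffer

# 0b0010 completes a pair with 0b0001, so they merge; 0b0100 is left alone.
print([bin(b) for b in try_aggregate([0b0001, 0b0100], 0b0010)])
```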
### Intermediates
Carl suggested that during aggregation, multiple aggregates at different levels of fullness, each individually high-valued (e.g. a smaller and a bigger full subtree), may be produced.
In this case it may be favorable to gossip some of these intermediates: the smaller but high-valued aggregates may fit other complements well, and improve the total aggregate.
### Worst case
If nothing is dropped, and yet no partitions are made, we have the status quo: local aggregation, `O(N**2)` on the network.
### Best case
If every step along the way results in a fuller, bigger partition, we have the (unrealistic) dream aggregation: `O(N*log(N))` on the network. (Note: better could be achieved, but that requires serial routing.)
### Average case
The exact cost-function is not formalized yet, and with many different realistic and less realistic network topologies it is hard to estimate now.
However, if a big enough portion of the network maintains this heuristic aggregation, and the network is a reasonably random mesh, I think we may get to `O(N*log(N))`, within a constant factor of the best case.
## Extensions
### Shuffling
Any heuristic that prefers certain groups of bits to aggregate together has a luck problem: a validator may end up with bad peers and repeatedly fail to be part of a smaller aggregation because of this.
To work around this, the mapping of bitfield bits to validator indices can be shuffled each slot (or other time frame): the heuristics do not change, but the validators get to construct the smaller aggregations with different peers each time, so they will not depend on the same validators as much.
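A sketch of such a per-slot shuffle; the seed derivation and hash-sort are illustrative stand-ins, not the beacon chain's actual shuffling algorithm:

```python
import hashlib

def bit_position(validator_index: int, num_validators: int, slot: int) -> int:
    """Bit assigned to a validator this slot; re-derived per slot so the
    heuristic groups different validators together over time."""
    seed = slot.to_bytes(8, "little")
    order = sorted(
        range(num_validators),
        key=lambda i: hashlib.sha256(seed + i.to_bytes(8, "little")).digest(),
    )
    return order.index(validator_index)

# The mapping is a permutation within each slot, but differs between slots:
for slot in range(2):
    print([bit_position(i, 8, slot) for i in range(8)])
```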