# Fusaka-Devnet-5 BPOs
## Links
- https://notes.ethereum.org/@jtraglia/fusaka_scheduling
- https://github.com/ethpandaops/fusaka-devnets/blob/master/ansible/inventories/devnet-5/inventory.ini
- https://github.com/ethpandaops/xatu-analysis
## Intro
Currently we collect a vast amount of Mainnet data. Unfortunately, with the addition of PeerDAS, this data is not going to be helpful when picking BPO values for the fork. The current idea is to ship Fusaka with extra-safe initial BPO values, collect data, then scale up more aggressively with more confidence shortly after.
But what do we set those initial BPO values to? Enter `fusaka-devnet-5`.
`fusaka-devnet-5` is more of a smoke test than anything. We need to pick values that will be safe while also yielding enough data to inform future BPO values. Since it is very hard to replicate Mainnet on a devnet, we're checking that things work on a fundamental level, but not much beyond that. The network has a high node count (1-2k) to help bridge this gap, but this is still only ~1/8th of Mainnet.
The network has a mix of CL/EL clients, supernodes/full nodes, and MEV/non-MEV slots. Full nodes have artificial bandwidth limits applied to match [EIP-7870](https://eips.ethereum.org/EIPS/eip-7870).
## Analysis
`fusaka-devnet-5` has a BPO fork scheduled every 24 hours. This analysis focuses primarily on the `max=21` BPO values. There are obviously MANY angles to cover here. Luckily, we can cover a lot of bases with a single metric: `Attestation Head Correctness %`
> For each slot, take all attesters and derive what % of them voted for the block that was proposed in their slot. Importantly, we ignore attestations for the block from later slots.
Derived from Xatu ClickHouse. This shows whether the attester's node saw the block + columns before 4s and sidesteps other complexity. If an attester votes for a parent block, it means they didn't see the block/columns in time. **This is a very nice proxy metric that rolls lower-level metrics into a single data point.**
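A minimal sketch of how this metric can be computed per slot, assuming a hypothetical flattened record shape (`slot`, `voted_root`, `proposed_root`) rather than the real Xatu schema:

```python
from collections import defaultdict

def head_correctness(attestations):
    """Per-slot Attestation Head Correctness %.

    `attestations` is a hypothetical list of dicts:
      slot          - the slot the attester was assigned to
      voted_root    - block root the attester voted for
      proposed_root - root of the block actually proposed in that slot
    Attestations included in later slots are assumed filtered out upstream.
    """
    votes = defaultdict(lambda: [0, 0])  # slot -> [correct, total]
    for att in attestations:
        votes[att["slot"]][1] += 1
        if att["voted_root"] == att["proposed_root"]:
            votes[att["slot"]][0] += 1
    return {slot: 100.0 * c / t for slot, (c, t) in votes.items()}

atts = [
    {"slot": 1, "voted_root": "0xabc", "proposed_root": "0xabc"},
    {"slot": 1, "voted_root": "0xparent", "proposed_root": "0xabc"},
]
print(head_correctness(atts))  # -> {1: 50.0}
```

An attester voting for `0xparent` counts against the slot, matching the "voted for the parent = didn't see it in time" interpretation above.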
With Xatu we have the ability to filter and group this data on both the proposer side AND the attester side. E.g. "Show slots that were proposed by all supernodes and attested to by Lodestar regular nodes when proposed via mev-relay, grouped by proposer CL type"
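The proposer-side/attester-side filtering and grouping can be sketched in plain Python; every field name below (`proposer_node_type`, `attester_cl`, `mev`, `proposer_cl`, `head_correct`) is a hypothetical stand-in for the real Xatu columns:

```python
from collections import defaultdict

def grouped_correctness(rows, proposer_node_type, attester_cl, mev):
    """Filter attestation rows on both sides, then group head-correctness %
    by proposer CL type. Field names are hypothetical, not the Xatu schema."""
    groups = defaultdict(lambda: [0, 0])  # proposer_cl -> [correct, total]
    for r in rows:
        if (r["proposer_node_type"] == proposer_node_type
                and r["attester_cl"] == attester_cl
                and r["mev"] == mev):
            groups[r["proposer_cl"]][0] += r["head_correct"]
            groups[r["proposer_cl"]][1] += 1
    return {cl: 100.0 * c / t for cl, (c, t) in groups.items()}

rows = [
    {"proposer_node_type": "supernode", "attester_cl": "lodestar",
     "mev": True, "proposer_cl": "lighthouse", "head_correct": True},
    {"proposer_node_type": "supernode", "attester_cl": "lodestar",
     "mev": True, "proposer_cl": "lighthouse", "head_correct": False},
    {"proposer_node_type": "regular", "attester_cl": "prysm",
     "mev": False, "proposer_cl": "teku", "head_correct": True},
]

# "Supernode proposers, Lodestar attesters, MEV slots, grouped by proposer CL":
print(grouped_correctness(rows, "supernode", "lodestar", True))
# -> {'lighthouse': 50.0}
```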
### Tests
#### 1. Can a full node propose a block at max blob count without MEV-Boost?
> As per [EIP-7870](https://eips.ethereum.org/EIPS/eip-7870), full nodes have 50 Mbps of upload to play with. They have to gossip all 128 columns to their peers, and 66% of the slot's attesters have to see those columns before 4s.
> Note: Clients might support a `--max-blobs` flag for Mainnet launch to help here.
Metric: `Attestation Head Correctness %`
Proposer Filters:
- Node Type: Regular (Full nodes only)
- CL Clients: All ✓
- EL Clients: All ✓
- Block Building: Locally Built Blocks Only
Attester Filters:
- Node Type: All (Full nodes + Supernodes)
- CL Clients: All ✓
- EL Clients: All ✓
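As a rough sanity check on that upload budget, here is a back-of-envelope calculation. It assumes the PeerDAS constants (128 KiB blobs, 2x erasure extension split into 128 columns, a 48-byte KZG proof per cell) and deliberately ignores gossip amplification and message overhead:

```python
BLOB_BYTES = 128 * 1024      # 4096 field elements x 32 bytes
NUM_COLUMNS = 128
KZG_PROOF_BYTES = 48
MAX_BLOBS = 21               # the BPO value under test

# After the 2x erasure extension, each blob contributes one cell per column.
cell_bytes = 2 * BLOB_BYTES // NUM_COLUMNS            # 2048 bytes
column_bytes = MAX_BLOBS * (cell_bytes + KZG_PROOF_BYTES)
total_bytes = NUM_COLUMNS * column_bytes              # one copy of all columns

UPLOAD_MBPS = 50  # EIP-7870 full-node upload budget assumed here
seconds_one_copy = total_bytes * 8 / (UPLOAD_MBPS * 1e6)
print(total_bytes, round(seconds_one_copy, 2))        # ~5.6 MB, ~0.9 s
```

Gossip sends each column to several mesh peers, so the real upload cost is a multiple of this single-copy figure, which is why the 4s deadline is tight for a full-node proposer.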
#### 2. Can supernodes see 128 columns at max blob count on time?
> Nodes will only attest to a block once they see all the columns they're custodying. In the case of supernodes, this is all 128 columns. We need to assert that these nodes can consistently see all columns well before 4s. The worry is that it only takes one late column for a large % of validators to skip attesting to the block. Will validators reconstruct in time?
Metric: `Attestation Head Correctness %`
Proposer Filters:
- Node Type: All ✓
- CL Clients: All ✓
- EL Clients: All ✓
- Block Building: All ✓
Attester Filters:
- Node Type: Supernodes
- CL Clients: All ✓
- EL Clients: All ✓
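Under the 2x erasure extension, any 64 of the 128 columns are enough to reconstruct the rest, so one late column is recoverable in principle; the question above is whether reconstruction finishes before 4s. A toy check of that threshold (the constants follow the PeerDAS design; the helper itself is hypothetical):

```python
NUM_COLUMNS = 128
RECOVERY_THRESHOLD = NUM_COLUMNS // 2  # 2x extension: any half suffices

def can_reconstruct(received_columns):
    """True if a node holding `received_columns` (a set of column indices)
    has enough to erasure-decode every missing column."""
    return len(set(received_columns)) >= RECOVERY_THRESHOLD

# A single late column is recoverable as long as the other 127 arrived:
print(can_reconstruct(set(range(127))))  # -> True
```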
#### 3. Can the network recover columns in time?
If the proposer fails to publish a single column, can the network recover that column in time to save the slot? Some Lighthouse nodes have been configured to randomly drop columns. We can look through the logs for some of these slots and see whether they were reorged.
Metric: `Attestation Head Correctness %`
Proposer Filters:
- Node Type: All ✓
- CL Clients: All ✓
- EL Clients: All ✓
- Block Building: All ✓
Attester Filters:
- Node Type: Supernodes
- CL Clients: All ✓
- EL Clients: All ✓
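To build intuition for how much random column dropping a supernode can absorb, here is a toy Monte Carlo model: each of the 128 columns arrives independently with probability `1 - p_drop`, and the node can erasure-decode once it has 64. Real gossip losses are correlated (a dropped column is missing for everyone downstream of the dropper), so treat this as an optimistic bound, not a prediction:

```python
import random

def reconstruct_probability(p_drop, trials=5_000, seed=7):
    """Estimate the fraction of trials in which a supernode still ends up
    with the 64 of 128 columns needed to erasure-decode the rest, when each
    column is independently dropped with probability p_drop (toy model)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        received = sum(1 for _ in range(128) if rng.random() >= p_drop)
        ok += received >= 64
    return ok / trials

# Even heavy independent dropping leaves reconstruction feasible:
print(reconstruct_probability(0.3))  # close to 1.0
```

In this independence model the threshold behaviour is sharp: recovery is near-certain below ~40% drop and collapses above ~55%, which is why a single deliberately dropped column should be easy for the network to route around.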
#### 4. Syncing
Can new nodes sync and join the network at this blob count?
[Syncoor](https://syncoor.fusaka-devnet-3.ethpandaops.io/#/?directory=fusaka-devnet-3&network=fusaka-devnet-3) can run sync jobs on a schedule.