# Goerli blob analysis
## Introduction
The Goerli public testnet underwent the Dencun fork on 17th Jan at ~06:30AM UTC. There was a brief non-finality incident from ~6:30AM UTC till ~10:30AM UTC; after the hotfix was applied, the network was healthy and finalizing again. The issue analysis can be found [here](https://notes.ethereum.org/@parithosh/dencun-goerli-debugging).
No extra effort has been spent on peering the sentries together. They rely on the public network and should therefore represent real-world routing scenarios.
The sentries all currently run validators and are all utilizing mev-boost.
Note: Pre-Dencun data was collected via sentries running [eleel](https://github.com/sigp/eleel) (an execution layer multiplexer to reduce costs). It was missing an update, so post-Dencun we had to rely on validator nodes, with each CL running its own EL. Please take this variance into account when comparing performance.
## Blob spam setup
From 10:40AM UTC till 5:40PM UTC, a 7h window, we ran [goomy](https://github.com/ethpandaops/goomy-blob) to spam blobs on the network. The spammer ran out of funds overnight and has since been restarted. There currently appears to be little blob traffic beyond our artificially induced spam, which makes sense as L2s haven't started using the feature yet.
This analysis is conducted only within the above 7h window.
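For scale, the 7h window puts an upper bound on how many blobs could have been included. A quick back-of-the-envelope calculation (using the mainnet/testnet slot time and the Deneb max-blobs-per-block parameter):

```python
# Upper bound on blobs the 7h spam window could contain.
SECONDS_PER_SLOT = 12       # Ethereum slot time
MAX_BLOBS_PER_BLOCK = 6     # Deneb target ceiling

window_s = 7 * 3600
slots = window_s // SECONDS_PER_SLOT
max_blobs = slots * MAX_BLOBS_PER_BLOCK
print(slots, max_blobs)  # 2100 slots, up to 12600 blobs
```

So the ~5k unique blobs observed below correspond to roughly 40% of the theoretical maximum for the window.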
## Blob analysis
![blob-overview](https://storage.googleapis.com/ethereum-hackmd/upload_88ca567584e4ec6538d9686ee56d53bc.png)
The data was collected via [Xatu](https://notes.ethereum.org/@ethpandaops/xatu-overview) and we processed ~30k blocks and ~80k blobs via sentries connected to 20 CLs spread across various regions.
![unique-blobs](https://storage.googleapis.com/ethereum-hackmd/upload_ffe6c511c569af779896e0adf0d53a19.png)
Of the blobs processed, ~5k were unique.
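The gap between ~80k processed and ~5k unique comes from many sentries observing the same sidecar. A minimal deduplication sketch (the event schema here is illustrative, not Xatu's actual format):

```python
# Collapse duplicate blob-sidecar sightings from many sentries into unique blobs.
# Each event is (slot, blob_index, first_seen_ms) -- an assumed, simplified schema.
def unique_blobs(events):
    seen = {}
    for slot, index, t_ms in events:
        key = (slot, index)
        # Keep the earliest sighting per (slot, index) pair.
        if key not in seen or t_ms < seen[key]:
            seen[key] = t_ms
    return seen

events = [
    (100, 0, 1200), (100, 0, 1450),  # same blob seen by two sentries
    (100, 1, 1300),
    (101, 0, 900),
]
uniq = unique_blobs(events)
print(len(uniq))  # 3 unique blobs from 4 sightings
```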
![blob-numbers](https://storage.googleapis.com/ethereum-hackmd/upload_d86772ede155d559fa6724e05a9b97ad.png)
Most blocks in this period contained 6 blobs, i.e. the maximum number.
[Link to blob numbers](https://grafana.observability.ethpandaops.io/d/be15cc56-e151-4268-8772-f4a9c6a4e246/blobs?orgId=1&from=1705488924273&to=1705513546572&var-network_name=goerli&var-client_name=All&var-consensus_implementation=All&var-consensus_version=All&var-geo_continent_code=All&var-heatmap_interval=250&var-interval_tight=%24__auto_interval_interval_tight&var-interval=%24__auto_interval_interval&var-interval_loose=%24__auto_interval_interval_loose&viewPanel=345)
![blob-propagation](https://storage.googleapis.com/ethereum-hackmd/upload_50b4b897be3a684f0595d0e255e974dc.png)
While in the best case the blobs propagate mostly in under 1s, the average and p50 show that they take ~1.5-2s, with peaks regularly at the 3s mark. This means we would, on average, be able to meet the attestation deadline of 4s.
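The p50 and deadline-hit-rate figures can be derived directly from per-blob first-seen timings. A minimal sketch with illustrative sample data (not the real dataset):

```python
import statistics

# Blob first-seen times in seconds into the slot (illustrative sample, not real data).
timings = [0.8, 1.4, 1.6, 1.9, 2.1, 2.4, 3.0, 3.2, 4.6, 5.1]

ATTESTATION_DEADLINE_S = 4.0  # attesters vote ~4s into the 12s slot

median = statistics.median(timings)
on_time = sum(t <= ATTESTATION_DEADLINE_S for t in timings) / len(timings)

print(f"p50 = {median:.2f}s, {on_time:.0%} of blobs arrive before the deadline")
```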
[Link to blob propagation graph](https://grafana.observability.ethpandaops.io/d/be15cc56-e151-4268-8772-f4a9c6a4e246/blobs?orgId=1&from=1705488924273&to=1705513546572&var-network_name=goerli&var-client_name=All&var-consensus_implementation=All&var-consensus_version=All&var-geo_continent_code=All&var-heatmap_interval=250&var-interval_tight=%24__auto_interval_interval_tight&var-interval=%24__auto_interval_interval&var-interval_loose=%24__auto_interval_interval_loose&viewPanel=184)
![avg-blob-propagation](https://storage.googleapis.com/ethereum-hackmd/upload_17e1efd887aa070daab8480c02b8dbdb.png)
The p95 graph is, however, a bit more concerning. While we mostly stay under the 3s mark, there are regular peaks that reach 5s into the slot.
![blob-heatmap](https://storage.googleapis.com/ethereum-hackmd/upload_123b6e8eac4a6650f32f2a0070ad0962.png)
The heatmap reinforces what we saw in the previous graphs. We see a healthy heatmap around the 500ms mark, with most blobs being received between 1.5s and 2s. The outliers of >4s would make up the p95 lines of the previous graphs.
[Link to blob heatmap](https://grafana.observability.ethpandaops.io/d/be15cc56-e151-4268-8772-f4a9c6a4e246/blobs?orgId=1&from=1705488924273&to=1705513546572&var-network_name=goerli&var-client_name=All&var-consensus_implementation=All&var-consensus_version=All&var-geo_continent_code=All&var-heatmap_interval=250&var-interval_tight=%24__auto_interval_interval_tight&var-interval=%24__auto_interval_interval&var-interval_loose=%24__auto_interval_interval_loose&viewPanel=185)
![blobs vs slots](https://storage.googleapis.com/ethereum-hackmd/upload_d24776c6201424887dd4ecce69280471.png)
The above graph shows blob propagation time depending on how many blobs were included in a slot. The graph is mostly flat, with a lot of variance; blobs took roughly the same time to propagate irrespective of how many were proposed in a slot. This is definitely good news and indicates that the blobs are not overwhelming the network.
[Link to blob sidecar propagation vs slot graph](https://grafana.observability.ethpandaops.io/d/be15cc56-e151-4268-8772-f4a9c6a4e246/blobs?orgId=1&from=1705488924273&to=1705513546572&var-network_name=goerli&var-client_name=All&var-consensus_implementation=All&var-consensus_version=All&var-geo_continent_code=All&var-heatmap_interval=250&var-interval_tight=%24__auto_interval_interval_tight&var-interval=%24__auto_interval_interval&var-interval_loose=%24__auto_interval_interval_loose&viewPanel=195)
![block vs blobs](https://storage.googleapis.com/ethereum-hackmd/upload_be63c129e6b1250f8ab32de2ba6dbfbf.png)
The above graph describes the block propagation time based on the number of blobs in a slot. This graph is also quite flat and indicates no stressing factor.
[Link to block prop vs slot graph](https://grafana.observability.ethpandaops.io/d/be15cc56-e151-4268-8772-f4a9c6a4e246/blobs?orgId=1&from=1705488924273&to=1705513546572&var-network_name=goerli&var-client_name=All&var-consensus_implementation=All&var-consensus_version=All&var-geo_continent_code=All&var-heatmap_interval=250&var-interval_tight=%24__auto_interval_interval_tight&var-interval=%24__auto_interval_interval&var-interval_loose=%24__auto_interval_interval_loose&viewPanel=217)
![Time between first and last blob](https://storage.googleapis.com/ethereum-hackmd/upload_1779d1ef431775d400a9c9041ed48c15.png)
The time between the first and last blob received is quite narrow, with the spread ranging from 150ms to 250ms between 2-blob and 6-blob slots. This graph can, however, be influenced by the source of the blobs: since they come from our spammer, they could all originate from the same source mempool.
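The first-to-last spread can be computed from the same per-blob earliest-seen records, grouped by slot. A minimal sketch (hypothetical data shape):

```python
# Spread between the first and last blob first seen in each slot, in ms.
# Input: per-blob earliest-seen times grouped by slot (illustrative data).
def first_last_spread(blob_times_by_slot):
    return {slot: max(ts) - min(ts) for slot, ts in blob_times_by_slot.items() if ts}

sample = {
    100: [1200, 1300, 1350],      # 3 blobs, 150ms spread
    101: [900, 980, 1010, 1150],  # 4 blobs, 250ms spread
}
print(first_last_spread(sample))  # {100: 150, 101: 250}
```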
## Block Propagation and Attestation metrics
We will look at the data from the last 2 days, so we can compare the effect of the Dencun upgrade.
![block-prop](https://storage.googleapis.com/ethereum-hackmd/upload_856af3452fd9e4173bb6f9f3f0d67e51.png)
Block propagation was a bit more scattered in the heatmap before the fork, followed by the non-finality period and finally the finalized Dencun chain. Post-Dencun, the nodes seem to show lower variance in propagation. Blocks are mostly propagated in under 2s, with peaks at the 3s mark. This is healthy, as the attestation deadline is ~4s. The heatmap reinforces this view.
![att-heatmap](https://storage.googleapis.com/ethereum-hackmd/upload_ce7fbcbc0c0d43493bbea394ac029f85.png)
The attestation spread is a bit easier to read than the block propagation heatmap, but indicates the same thing. The Dencun fork seems to have reduced some variance. Attestations are performed at a healthy rate, concentrated at the expected 4s mark. There are some outlier attestations that take longer, likely corresponding to the higher p95 values seen earlier.
![reorg](https://storage.googleapis.com/ethereum-hackmd/upload_bc5aa96ecb6ad10b280f9bf92fa46a77.png)
We did see some reorgs, but they were limited to depth 1 and weren't a regular occurrence.