# fusaka-syncing-bugs

Here is a collection of overall bugs across devnet 3 and mainnet:

**Prysm**:

During the latest non-finality, we had an issue when starting a node from a checkpoint with a pre-existing DB. (Starting from genesis was OK, and starting from a checkpoint with an empty DB was OK.) This bug was possibly more likely to happen when the latest justified checkpoint was far away from the latest finalized checkpoint (when, for example, we have a single checkpoint with > 2/3 participation).

**Teku**:

OOM bug: https://github.com/Consensys/teku/issues/9824

Syncing bug: https://github.com/Consensys/teku/issues/9826

Upgrade incident bug (affecting mainnet): https://hackmd.io/@teku/B1lVi-Scxg

The OOM bug was mainly triggered by a malicious node not getting rate limited. Example: 16Uiu2HAmHbojWU7RFGhFGpU1rVhcJJRjfffR5pZSF7RAojhwhNvv was constantly asking for 512 columns by root, leading to a huge memory spike and crashing Teku nodes. (A minimal sketch of per-peer budgeting for by-root requests is at the end of this note.)

**Lighthouse**:

Issues maintaining peers across all custody subnets. A fix is in the works.

Issues syncing on "forky" chains. A fix is in the works; it should make syncing more stable in general.

Validator overwhelming the BN with requests while the node isn't synced (resolved).

**Lodestar**:

This is the relevant stuff that got merged after the non-finality event. Some of it is not specifically pertinent, but all of it is tangential to sync and peer scoring, to maintain a mesh of peers to pull from:

- [increase max peers to 200 for a better peer pool to request from](https://github.com/ChainSafe/lodestar/pull/8272)
- [fix how we deserialize data from getBlobsV2 to drastically improve performance](https://github.com/ChainSafe/lodestar/pull/8275)
- [self rate limiter to handle dead requests](https://github.com/ChainSafe/lodestar/pull/8295)
- [serve data columns by root for finalized epochs](https://github.com/ChainSafe/lodestar/pull/8250)

**Nimbus**: coming soon

**Grandine**:

We suffered from slow sync speed and a data redownload loop from the last finalized slot, where the node retries the download many times but can't move the local head. This is the expected behavior with our syncing strategy, and the latest changes could partially fix this issue. However, even though the network has recovered (finalized again), the finalized slot still can't move, as we can see on the grandine-geth-1 instance, so I think it is a potential bug.

**Reth**:

Incorrect blob fee comparison: https://github.com/paradigmxyz/reth/pull/18216

Mainnet bug: https://github.com/paradigmxyz/reth/issues/18205

**Erigon**:

We still have some issues with Erigon. See Eth R&D: Erigon <> Prysm. Errors like:

- [2025-09-01 08:14:14.42] ERROR execution: Got a validation error in newPayload error=invalid block[5/5 Execution] wrong trie root
- [2025-08-28 17:17:14.38] ERROR sync: Could not reconstruct full bellatrix block batch from blinded bodies error=received 0 payload bodies from engine_getPayloadBodiesByRangeV1 with count=3 (start=211195): engine api payload body response is invalid

Usually wiping everything fixes the issue.

**NimbusEL**: Reported OOM issues being investigated.

**Besu**: coming soon

**Geth**:

* CRIT [09-04|14:15:15.182] Failed to truncate extra state histories err="history head truncation out of range, tail: 154748, head: 244349, target: 244748"

**Nethermind**: no reported issues
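
As context for the Teku OOM item above (a peer repeatedly asking for 512 columns by root without being rate limited), here is a minimal, hedged sketch of a per-peer request budget. This is not Teku's implementation; the class and method names, the one-minute window, and the 512-column budget are assumptions chosen for illustration only.

```java
// Hedged sketch, NOT Teku's actual code: a per-peer window budget that caps
// how many data columns a peer may request by root per minute. The window
// size and budget are illustrative assumptions.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ByRootRequestLimiter {
  private static final long WINDOW_MILLIS = 60_000;      // refill window (assumed)
  private static final int MAX_COLUMNS_PER_WINDOW = 512; // per-peer budget (assumed)

  private static final class Bucket {
    long windowStart;
    int columnsServed;
  }

  private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

  /** Returns true if the request may be served, false if the peer exceeded its budget. */
  public boolean allow(String peerId, int requestedColumns, long nowMillis) {
    Bucket bucket = buckets.computeIfAbsent(peerId, id -> new Bucket());
    synchronized (bucket) {
      if (nowMillis - bucket.windowStart >= WINDOW_MILLIS) {
        bucket.windowStart = nowMillis;   // start a fresh window for this peer
        bucket.columnsServed = 0;
      }
      if (bucket.columnsServed + requestedColumns > MAX_COLUMNS_PER_WINDOW) {
        return false;                     // over budget: refuse, and optionally down-score the peer
      }
      bucket.columnsServed += requestedColumns;
      return true;
    }
  }
}
```

In a real client the refusal path would typically also feed into peer scoring or disconnection, so a peer hammering columns-by-root is shed instead of repeatedly buffering large responses in memory.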