# Pectra state hashing performance with a big balance deposit queue

The problem with `pending_deposits` is that elements are removed from the beginning of this list (and of others like `pending_consolidations` and `pending_partial_withdrawals`), simulating queue semantics. Such removal forces clients to recompute the hash tree root of these queues almost from scratch, whereas for append-only lists the recomputation is incremental. We therefore expect that, with a large number of deposits in the queue, the performance of the epoch processing routine can degrade.

To test that expectation, we increased the number of pending balance deposits to a total of 100,000 entries and checked the epoch processing delays for Lodestar and Lighthouse.

## Timeline (UTC)

Sep 23
- 11:50 - start filling up the queue with 10k top-up deposits
- 12:16 - 10k top-up deposits complete
- 12:46 - start filling up the queue with an additional 45k top-up deposits
- 14:30 - 45k top-up deposits complete (50k pending balance deposits in state)
- 16:00 - 46.4k pending balance deposits still in queue

Sep 24
- 10:00 - queue down to 27k pending balance deposits
- 10:00 - start filling up the queue with 80k top-up deposits
- 13:20 - 80k top-up deposits complete (100k pending balance deposits in state)

Sep 25
- 10:30 - 53.1k pending balance deposits still in queue
- 18:00 - 35k pending balance deposits still in queue (end of graph)

## Graphs

### Lighthouse

![](https://storage.googleapis.com/ethereum-hackmd/upload_213f9e6fdce57a74e1a92db3e2c3d3d2.png)

![](https://storage.googleapis.com/ethereum-hackmd/upload_b7b67ae24c28a89a4a946810d93e628b.png)

^ block processing time for the lighthouse-nethermind pair stays at 50ms for an unknown reason

![](https://storage.googleapis.com/ethereum-hackmd/upload_e3c19598919386ff2381f1b8c7c9552f.png)

### Lodestar

![](https://storage.googleapis.com/ethereum-hackmd/upload_a27140fc8cd58742e446037efa8b3ac2.png)
![](https://storage.googleapis.com/ethereum-hackmd/upload_b138a7f3c981f11b66ba72e563217d98.png)

^ in Lodestar we can see a slight increase for epoch transitions

### Xatu

![](https://storage.googleapis.com/ethereum-hackmd/upload_b32750fd5aca9ebff6ea9d52a0194088.png)

![](https://storage.googleapis.com/ethereum-hackmd/upload_6c5c2f085d674b0f99da46d7d0e3b07f.png)

^ Xatu shows an unusual increase in block propagation time

![](https://storage.googleapis.com/ethereum-hackmd/upload_fca662fd74ebcea6e1f21707262b6df0.png)

![](https://storage.googleapis.com/ethereum-hackmd/upload_9d6b0e75d454214f6de21c48969dc48d.png)

^ and also increased attestation propagation times during the first fill-up period

## Conclusion

While our test of filling the queue with 100,000 pending balance deposits did not result in a significant increase in epoch transition processing times, we observed notable performance impacts elsewhere. Specifically, Xatu displayed a significant increase in block and attestation propagation times, particularly from Sep 23, 20:00 to Sep 24, 10:00, and from Sep 24, 21:30 to Sep 25, 10:30. These timeframes saw unusually high propagation delays, even though there was no significant load on the network during these windows.

These findings suggest that, although the epoch transition process remains largely unaffected by large queue sizes, other aspects of network performance may degrade under similar conditions. Further investigation is needed to determine the root cause of these propagation delays and their implications for overall network performance.
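The hashing asymmetry that motivated this test can be illustrated with a minimal sketch. This is not client code: it models a fixed-depth binary Merkle tree (depth 10, i.e. 1024 leaf slots, an illustrative size) and counts how many hashes have to be freshly computed when a cached tree is updated. Appending an element only dirties the branch from the new leaf to the root, while popping from the front shifts every remaining leaf one slot, invalidating essentially the whole occupied subtree.

```python
import hashlib

DEPTH = 10                # illustrative tree depth: 2**10 = 1024 leaf slots
ZERO = b"\x00" * 32       # padding leaf for empty slots

memo = {}                 # cache of previously computed nodes, keyed by children
fresh_hashes = 0          # how many hashes the current recomputation needed

def h(a: bytes, b: bytes) -> bytes:
    """Hash a node pair, counting only cache misses as real work."""
    global fresh_hashes
    key = a + b
    if key not in memo:
        fresh_hashes += 1
        memo[key] = hashlib.sha256(key).digest()
    return memo[key]

def hash_tree_root(leaves):
    """Merkleize the leaf list, zero-padded to the fixed tree width."""
    layer = leaves + [ZERO] * ((1 << DEPTH) - len(leaves))
    while len(layer) > 1:
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# A queue of 512 distinct non-zero leaves, then warm the cache once.
queue = [i.to_bytes(32, "little") for i in range(1, 513)]
hash_tree_root(queue)

# Append one element: only the leaf-to-root path is new (~DEPTH hashes).
fresh_hashes = 0
hash_tree_root(queue + [(513).to_bytes(32, "little")])
append_cost = fresh_hashes

# Pop from the front: every leaf shifts, so nearly the whole occupied
# subtree must be rehashed (~number of occupied leaves).
fresh_hashes = 0
hash_tree_root(queue[1:])
pop_cost = fresh_hashes

print(f"append: {append_cost} hashes, pop-front: {pop_cost} hashes")
```

The append cost stays at roughly the tree depth while the pop-front cost grows with the queue length, which is why a 100k-entry queue that drains from the front was the interesting case to test.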