# Derisking 100MGas

The following is a braindump of things we need to investigate before giving an okay to a 100M gas limit. All worst cases are collected in geth via state tests. The fuzzers used for finding these are not sophisticated, and this analysis should be replicated in a less time-constrained manner. (Reproduction sketches for several items are collected at the end of this document.)

## Hard requirements

- Modexp repricing: the worst cases we see are around 6 seconds per block (addressed in Fusaka; see the precompile benchmark sketch at the end)
- BN256Add repricing: the worst cases are around 4.7 seconds per block
- BN256Pairing repricing: the worst cases are around 2+ seconds per block
- BN256Mul repricing: worst case around 2+ seconds per block
- LOG repricing, or changing the devp2p limits for receipts (see the receipt-size sketch at the end)
- SSTORE with large trie depths / a solution for the XEN contract

## To investigate

- Blocks can be 20% larger than the gas limit (@benaadams), EIP-7778 (see the refund sketch at the end)
- Higher tx throughput => higher storage churn => more compaction => more housekeeping work => slower IO (raulk)
- State growth outpacing sync speed (fjl): this will likely not be a problem at 100M, but we will have to deal with it above 100M.
- Higher bandwidth competition with everything else (CL and EL mempool)
- All opcodes in tight loops (covered by the same benchmark sketch at the end)
- CREATE and CALL chains
- Big stacks
- Predeployed contracts (deposit contract, withdrawal queue)
- Storage requirements, storage load on datasets
- KZG precompile
- BN256Mul precompile
- Try to measure or model the longest execution time for a single block (this involves txs accessing very old random state data, and requires a big state).
- Check a block filled with the following CL requests (MarekM):
  - deposits
  - consolidations
  - withdrawals/exits
- 7702
- Jumpdest analysis (see the analysis sketch at the end)
- How does mev-boost block-building timing change? (transcribed by raulk)
- Lockstep-sync scenarios, e.g. how long can the EL be down while the CL is still able to catch it up (siladu)
- Upstream network constraints, missed proposals (j.florentine)
- Largest possible blocks: stuffing calldata to saturate the gas limit? (raulk; see the receipt-size sketch at the end)
- Engine API encoding size / overhead (fjl)
- Deterioration vs. recovery (and time to recovery) when operating near the limit: I'm not sure we have the mechanisms to recover, rather than deteriorate further, once a series of large blocks kicks things off balance, or after some Internet-wide event hits even for a short period (cskiraly)

## Already investigated

- ECRecover: not an issue
- Blake2f: worst case around 1 sec
- Datacopy: not an issue
- All BLS precompiles: around 500 ms
- MSTORE: memory size around 7 MB
- Random SSTOREs
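## Sketches

A minimal harness for reproducing the precompile worst cases (and, with different bytecode, the opcodes-in-tight-loops item), assuming go-ethereum's `core/vm/runtime.Execute` helper; the hand-assembled loop below is illustrative, not one of the actual fuzzer-found inputs:

```go
package main

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/core/vm/runtime"
)

// Hand-assembled bytecode that STATICCALLs the BN256Add precompile
// (address 0x06) in a tight loop until the gas budget is exhausted.
// Zeroed memory encodes two points at infinity, a valid 128-byte input.
var loopCode = []byte{
	0x5b,       // JUMPDEST   (offset 0, loop head)
	0x60, 0x00, // PUSH1 0    retSize
	0x60, 0x00, // PUSH1 0    retOffset
	0x60, 0x80, // PUSH1 128  argsSize (two G1 points)
	0x60, 0x00, // PUSH1 0    argsOffset
	0x60, 0x06, // PUSH1 6    BN256Add address
	0x5a,       // GAS        forward all remaining gas
	0xfa,       // STATICCALL
	0x50,       // POP        discard the success flag
	0x60, 0x00, // PUSH1 0
	0x56,       // JUMP       back to the loop head
}

func main() {
	cfg := &runtime.Config{GasLimit: 100_000_000} // one full 100M-gas block
	start := time.Now()
	// Execute installs loopCode at a scratch address and runs it; the
	// loop terminates with an out-of-gas error once the budget is burned.
	_, _, err := runtime.Execute(loopCode, nil, cfg)
	fmt.Printf("elapsed=%v err=%v\n", time.Since(start), err)
}
```

Swapping the precompile address and input size covers the Modexp, BN256Pairing, BN256Mul, and KZG cases; replacing the STATICCALL with a single opcode gives the tight-loop opcode benchmarks.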
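Back-of-envelope bounds behind the LOG/receipt concern and the calldata-stuffed-block question, using the post-Istanbul cost schedule (8 gas per byte of LOG data; 4 gas per zero calldata byte under EIP-2028):

```go
package main

import "fmt"

func main() {
	const (
		gasLimit  = 100_000_000
		gLogData  = 8 // gas per byte of LOG data
		gZeroByte = 4 // gas per zero calldata byte (EIP-2028)
	)
	// Upper bounds that ignore base costs (375 per LOG, 21000 intrinsic
	// gas per tx, etc.), so the real maxima are slightly lower.
	fmt.Printf("max LOG data per block:         ~%.1f MB\n", float64(gasLimit/gLogData)/1e6)
	fmt.Printf("max zero-byte calldata per block: ~%.0f MB\n", float64(gasLimit/gZeroByte)/1e6)
}
```

Roughly 12.5 MB of log data per block is what strains the devp2p receipt messages; EIP-7623 floor pricing (about 10 gas per zero calldata byte for data-dominated txs) brings the ~25 MB calldata bound down to roughly 10 MB.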
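On the "blocks can be 20% larger" item, assuming the figure refers to the EIP-3529 refund cap: refunds may offset up to one fifth of the gas a transaction executes, so up to 20% of executed gas never counts against the block limit. Solving the inequality gives executed gas of up to 1.25x the nominal limit; EIP-7778 proposes counting pre-refund gas against the block limit, which would remove the overshoot.

```go
package main

import "fmt"

func main() {
	const (
		gasLimit          = 100_000_000
		maxRefundQuotient = 5 // EIP-3529: refund <= gasUsed / 5
	)
	// charged = executed - refund >= executed * (q-1)/q, and only
	// charged gas counts against the block gas limit, so:
	maxExecuted := gasLimit * maxRefundQuotient / (maxRefundQuotient - 1)
	fmt.Printf("nominal limit: %dM, max executed: %dM\n",
		gasLimit/1_000_000, maxExecuted/1_000_000)
}
```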
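For the jumpdest-analysis item: every code execution, including CREATE initcode and 7702 delegation targets, requires a linear pre-pass over the code to mark valid jump destinations. A minimal sketch of that pass (geth's real implementation builds a bit vector in `core/vm` rather than a map):

```go
package main

import "fmt"

// validJumpdests sketches the pre-pass an EVM runs before executing
// code: mark every JUMPDEST byte that is not inside PUSH immediate
// data. The pass is O(len(code)), which is why repeated analysis of
// large code is worth measuring at higher gas limits.
func validJumpdests(code []byte) map[uint64]struct{} {
	dests := make(map[uint64]struct{})
	for pc := uint64(0); pc < uint64(len(code)); pc++ {
		switch op := code[pc]; {
		case op == 0x5b: // JUMPDEST
			dests[pc] = struct{}{}
		case op >= 0x60 && op <= 0x7f: // PUSH1..PUSH32
			pc += uint64(op - 0x5f) // skip the immediate bytes
		}
	}
	return dests
}

func main() {
	// Worst case for the analysis itself: code that is nothing but
	// JUMPDESTs, one entry (or bit) per byte of code.
	code := make([]byte, 24_576) // max deployed code size (EIP-170)
	for i := range code {
		code[i] = 0x5b
	}
	fmt.Println("valid jumpdests:", len(validJumpdests(code)))
}
```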