# Testnet testing tool
A list of events/conditions that we'd like to assert have happened in a devnet or testing environment:
- [x] Stability (no funny business, just a cleanly run network)
- [x] 100% Finality
- [x] At least X% attestations per epoch (98%?)
- [x] Fewer than X re-orgs per epoch (probably ~0.1, i.e. at most one re-org every 10 epochs)
- [x] No re-orgs greater than depth 1
- [x] Every client combo proposes block
- [ ] Every client combo proposes mev block
> [name=pk910][color=#db3fff]check via mev-relay api or EL block extra data?
- [x] Every CL client proposes a block with:
  - [x] EL operations
    - [x] normal TXs
    - [x] Blob TXs
  - [x] CL operations:
    - [x] deposits
    - [x] attester slashings
    - [x] proposer slashings
    - [x] voluntary exits
    - [x] attestations
    - [x] 0x01 credential changes
    - [x] withdrawals
- [x] Every client combo creates an exit message that gets gossiped to every other combo
> [name=pk910][color=#db3fff]Do we want to test exit message creation on clients here or just the propagation from each client?\
So, should the test call some client specific exit command or just submit a pre-generated exit message via each client?
- [ ] Deposit voting checks
- [ ] Slashing for sending the same blob with a different head
> [name=pk910][color=#db3fff]Is there already some code that builds such slashable blobs? (ie. blobber?)
- [ ] Slashing for double attestation with every client combo
> [name=pk910][color=#db3fff]Is there already some code that builds such slashable attestations? (I think I've seen some python code somewhere)
- [x] Messages/operations in blocks during non-finality
  - [x] EL operations
    - [x] normal TXs
    - [x] Blob TXs
  - [x] CL operations:
    - [x] deposits
    - [x] attester slashings
    - [x] proposer slashings
    - [x] voluntary exits
    - [x] attestations
    - [x] 0x01 credential changes
    - [x] withdrawals
- [x] Basic tx
- [x] Basic contract deployment
- [ ] Call every opcode once
- [x] Basic blobs
- [ ] [Tim suggestion] Spam blocks & blobs to see respective base fees rise/fall as expected
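The stability targets at the top of the list (attestation rate, re-org frequency, re-org depth) reduce to simple threshold checks once the raw counts are scraped from a beacon node or an explorer. A minimal sketch, assuming the counts are already collected; the function names and the exact thresholds are illustrative, not from any existing tool:

```python
# Sketch of the per-epoch health thresholds from the checklist above.
# The raw counts (included/expected attestations, observed re-orgs) would
# come from a beacon node API or an explorer; names here are hypothetical.

def attestation_rate(included: int, expected: int) -> float:
    """Fraction of expected attestations that were actually included."""
    if expected <= 0:
        raise ValueError("expected must be positive")
    return included / expected

def network_healthy(included: int, expected: int,
                    reorgs: int, epochs: int, max_reorg_depth: int,
                    min_att_rate: float = 0.98,
                    max_reorgs_per_epoch: float = 0.1) -> bool:
    """Checklist thresholds: >= 98% attestations, at most ~0.1 re-orgs
    per epoch (one every 10 epochs), and no re-org deeper than 1."""
    return (attestation_rate(included, expected) >= min_att_rate
            and reorgs / epochs <= max_reorgs_per_epoch
            and max_reorg_depth <= 1)
```

Keeping the thresholds as parameters makes it easy to tighten them per devnet (e.g. a small Kurtosis network should tolerate zero re-orgs).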
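For the spam test above, the expected base fee movement follows the EIP-1559 update rule (the blob base fee in EIP-4844 uses a similar excess-gas mechanism): full blocks raise the base fee by up to 1/8 (~12.5%) per block, empty blocks lower it by the same bound. A sketch of the reference formula the assertion could check observed headers against:

```python
# EIP-1559 base fee update rule: the fee moves toward equilibrium at the
# gas target, changing by at most 1/8 per block.
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8

def next_base_fee(parent_base_fee: int, parent_gas_used: int,
                  gas_target: int) -> int:
    if parent_gas_used == gas_target:
        return parent_base_fee
    if parent_gas_used > gas_target:
        delta = max(parent_base_fee * (parent_gas_used - gas_target)
                    // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
        return parent_base_fee + delta
    delta = (parent_base_fee * (gas_target - parent_gas_used)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return parent_base_fee - delta

# Spam scenario: full blocks (2x target) must raise the fee every block.
fee = 1_000_000_000
for _ in range(3):
    new_fee = next_base_fee(fee, 30_000_000, 15_000_000)
    assert new_fee > fee
    fee = new_fee
```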
## More complex tests
- [ ] assert that snap sync works and that serving snap-sync data works
- [ ] take < 1/3 of validators offline, then verify they can rejoin/sync
- [ ] take > 1/3 of validators offline (non-finality), then verify they can rejoin/sync
- [ ] [probably just in controlled environments like Kurtosis]:
  - [ ] network partition (no side has 2/3), non-finality, rejoin, resolve to one fork
  - [ ] network partition (one side has > 2/3), rejoin, resolve to finalized fork
  - [ ] during partition tests:
    - [ ] Blob submitted to fork1, reorg fork2 onto fork1, blob should be present in nodes on fork2
    - [ ] TX submitted to fork1, reorg fork2 onto fork1, TX should be present in nodes on fork2
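The partition assertions above can be captured in a toy model before wiring them up against real nodes: two sides share a prefix, diverge during the partition, and after the heal the fork2 node re-orgs onto fork1, so a TX/blob included only on fork1 must become canonical on the fork2 node. All names and the chain representation here are illustrative:

```python
# Toy model of the partition re-org checks: a chain is a list of blocks,
# each block a set of tx/blob identifiers. Names are hypothetical.

def canonical_payloads(chain):
    """Union of all tx/blob identifiers in a node's canonical chain."""
    out = set()
    for block in chain:
        out |= block
    return out

prefix = [{"tx_genesis"}]
fork1 = prefix + [{"tx_fork1", "blob_fork1"}]  # side that received the tx/blob
fork2 = prefix + [{"tx_fork2"}]                # other side of the partition

node_on_fork2 = list(fork2)
assert "blob_fork1" not in canonical_payloads(node_on_fork2)

# Partition heals; fork1 wins, the node re-orgs (depth 1) onto fork1.
node_on_fork2 = list(fork1)
assert {"tx_fork1", "blob_fork1"} <= canonical_payloads(node_on_fork2)
```

In a real test the same assertion would query each node's canonical chain (and blob sidecars) after the partition resolves, plus check that fork2's dropped transactions get re-broadcast into the mempool.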