# Rumor extension
This doc outlines the intentions and progress of extending Rumor and building new testing tools with it. -- `@protolambda`
Live progress: [`github.com/protolambda/rumor #refactor branch`](https://github.com/protolambda/rumor/tree/refactor)
#### TLDR of what we already have:
- Commands, shell, logging, scripting
- Gossip, RPC and Discv5 commands to use each of them manually. No data is persisted, but all interactions work
- Actors: manage different hosts in the same process
- Python bindings
- Sessions + remote control: concurrently make actors run things, from a different machine. TCP socket, Unix socket and WebSocket support.
- Already used for debugging, but limited to things like testing snappy compression; setting up bigger tests is problematic.
- Used in Stethoscope to script p2p test interactions
#### Extending Rumor to:
- Stay in consensus while experimenting.
- Running Rumor on testnets was possible before, but validation is now so strict that staying connected to peers while debugging requires too much work.
- Reduce boilerplate in tests. Tests should focus on one thing in detail, not everything at the same time
- Ease debugging:
- The shell is more useful when you do not have to pass around big binary blobs of consensus data manually
- Less error-prone when there is less manual repetition
- Get data:
- Make attestations, blocks and states more easily accessible
- Easily poll clients for data, logging the responses, for monitoring/debug purposes
- Attack nets (for testing):
- Share state and other data between p2p actors on the same machine to put large amounts of peers on the network and try to eclipse nodes.
- Better to explore this ourselves than to wait until a real attacker executes something like it
- Easily command nodes to do specific (malicious) things. E.g.:
- Command all nodes to connect and sync from a specific peer at the same time.
- Move and sync to chains with manual forkchoice
- Manipulate status messages, metadata exchange, and other RPC to throw off peers.
- Forkchoice tests:
- Set up a simple sync and/or gossip, to serve forkchoice test data, without instrumenting all the clients
- Room to dynamically build blocks/chains/attestations later
- Sync benchmarks
- A fast Go sync implementation + a scripted chain = benchmark sync in a reliable way
- More interactions to analyze data of:
- Log all events (tagged with time, actor, command id, and other custom data), with JSON-logging support
- Use [ShiViz](https://bestchai.bitbucket.io/shiviz/) for nice log visualization of p2p actions
#### What we need to do to get there:
- [x] Refactor [command handling](https://github.com/protolambda/ask) to keep it manageable + speed up further implementation
- [x] Implement [peer status and metadata tracking](https://github.com/protolambda/rumor/tree/refactor/control/actor/peer)
- [x] Use [ZRNT](https://github.com/protolambda/zrnt) for all the consensus logic
- [x] Data-sharing trees of state (with [ZTYP](https://github.com/protolambda/ztyp/), similar to Remerkleable, but in Go)
- Simplify DB and chain by just keeping every hot state in memory
- Sharing state between actors without mutability problems
- [x] Implement chain tracking (pulled prototype code from [experiment](https://github.com/protolambda/not-a-client))
- [x] Implement forkchoice (pulled experimental prototype code from [experiment](https://github.com/protolambda/zrnt/blob/master/eth2/forkchoice/proto_array.go))
- [ ] Implement global stores and commands (can be simplified to global dict):
- [ ] Attestations
- [x] SignedBeaconBlock
- [x] BeaconState
- [x] Dict of active chain instances (referencing entries in global stores)
- [x] Serve chain by re-using RPC code, iterate blocks from chain, pull from db
- [ ] Later:
- [ ] Re-use gossip code to put blocks and attestations automatically in store
- [ ] Implement some validation (gossip and RPC), tweaked to describe success/failure for individual conditions
- Goal is to catch things like underflows in validation (as critical bugs were found in clients there)
#### Difference from a client:
- Not prioritizing full validation; focusing on triggering behaviors instead
- DB is inefficient but useful for our purposes
- No validator support
- No Eth1 integration (for now?)
- No API (for now?)
#### New commands
```
done?
[ ] chain
[x] create <chain name> # Start a new chain, starting from some state
[ ] copy <src> <dest> # fork the chain by creating a copy of it
[x] switch <chain name> # Change actor to a different chain
[x] rm <chain name>
[ ] on <chain name>
[ ] hot # Manage hot part (forkchoice)
[ ] view [anchor root]
[ ] # extend with forkchoice options later
[ ] cold # Manage cold part (finalized data)
[ ] view <start> <stop>
[x] block <root> # process a block (from db)
[ ] attestation <root> # process an attestation (from db)
[ ] votes ... # get current latest votes
[ ] head # Manage current head of current chain
[ ] get
[ ] set # fixed point, don't change status otherwise
[ ] follow # follow proto-array forkchoice of chain
[x] serve # Serve by ... blocks of the current chain
[x] by-range # range
[x] by-root # root (optionally serve outside blocks of chain view)
[x] sync # Pull blocks from a peer by ... and optionally store and/or process them
[x] by-range # range
[x] by-root # root
TODO: maybe also more advanced sync?
[ ] auto # Auto select sync method, peers, round-robin, batching.
[ ] attestations: # manage a simple DB of attestations, *shared between actors and chains*
[ ] import # import attestation
[ ] gossip # track gossip to import attestations
[ ] subnet <index> # Join an attestation subnet to auto-import attestations from
[ ] global # Join the global net to auto-import attestation aggregates from
[ ] export # Output an attestation to file or log
[ ] stats # Show how many attestations, range, etc.
[ ] get # Get summary of an attestation
[ ] rm # Remove an attestation
[ ] blocks # manage a simple DB of blocks, *shared between actors and chains*
[x] import # Import a block from file or input
[ ] gossip # track gossip to import blocks
[x] export # Output a block to file or log
[x] stats # Show how many blocks, range, etc.
[x] get # Get summary of a block
[x] rm # Remove a block
[x] states # manage a simple DB of beacon states, *shared between actors and chains*
[x] import # Import a state from file or input
[x] export # Output a state to file or log
[x] stats # Show how many states, range, etc.
[x] get # Get summary of a state
[x] rm # Remove a state
[x] peer
[x] info # Extend peer information (enr data, libp2p identify info, metadata, status, etc.)
[x] status
[x] get # Get local status
[x] set # Set local status
[x] req # Request status of peer
[x] poll # Poll connected peers for their status
[x] serve # handle status requests on RPC and respond based on chain
[x] follow # Auto-updating status
[x] metadata
[x] ping # Ping peer for seq nr, update metadata
[x] pong # Serve others with pongs, request them for metadata
[x] get # Get local metadata
[x] set # Set local metadata
[x] req # Get metadata
[x] poll # Poll connected peers for metadata by pinging them on interval
[x] serve # Serve metadata
[x] follow # Auto-updating metadata
[x] peerstore
[x] create # Create a peerstore any actor can use (at the same time also)
[x] switch # Move one actor from one peerstore to the other (hot swap!)
[x] list # Show available peerstores
[x] enr
[x] make # Update to change ENR state of actor, interop with dv5 package better, etc.
```
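To make the tree concrete, a hypothetical session composing the commands above could look like this. Exact arguments and flags are not specified in this doc, so treat the lines (and the chain name) as a sketch:

```
states import            # load a BeaconState into the shared DB
chain create mychain     # start a new chain from some state
blocks import            # load a SignedBeaconBlock into the shared DB
chain block <root>       # process that block on the current chain
chain serve by-range     # serve the chain's blocks to syncing peers
peer status serve        # answer status requests based on the chain
```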