# Security Considerations and Spec Changes for a `MAX_EFFECTIVE_BALANCE` Increase

*Joint document with Mike Neuder. Special thanks to Roberto Saltini for review and for further analysis of the security of committees, linked in the document.*

This document follows up on [*a modest proposal*](https://notes.ethereum.org/@mikeneuder/Hkyw_cDE2) by exploring the security considerations of increasing the `MAX_EFFECTIVE_BALANCE` (`MaxEB`). In particular, we explore the potential effects on Ethereum's committee structures, activations, exits, and withdrawals, and we suggest potential spec changes where they are required. See [this PR](https://github.com/michaelneuder/consensus-specs/pull/3/files) for a full diff view of the proposed spec changes.

## Security of committees

### Committees in Ethereum

Ethereum operates three types of committees: slot committees (or attestation committees), subcommittees (what the spec calls committees), and sync committees.

- **Slot Committees**: These are groups of validators selected to attest in a given slot. The selection partitions the validator set into 32 groups without considering balances.
- **Subcommittees**: Each slot committee is further partitioned into subcommittees. These subcommittees were meant to be used as shard committees, so they are formed to ensure that there are always at least `TARGET_COMMITTEE_SIZE = 128` validators per subcommittee and no more than `MAX_COMMITTEES_PER_SLOT = 64` subcommittees in total. If the validator count is such that 64 full subcommittees can be formed, the number of validators in each grows; if not, the number of subcommittees shrinks. Presently, subcommittees only facilitate attestation aggregation at the p2p layer. Each subcommittee corresponds to an attestation subnet, and the subnets are used as the first layer of aggregation to produce the aggregate attestations that are ultimately gossiped globally.
- **Sync committee**: A committee of size `SYNC_COMMITTEE_SIZE = 512`, chosen every ~1 day, that is responsible for signing block headers to be used in the light client protocol.

We now go through these kinds of committees and analyze how they are affected by [the proposal](https://notes.ethereum.org/@mikeneuder/Hkyw_cDE2).

### Slot committees

It is important that a majority (>50%) of a slot committee's weight (validator balance) remains under honest control to ensure the proper functioning of the fork-choice rule. That said, we do not require a vanishingly small probability of failure (what would be called negligible for a cryptographic protocol), because an occasional failure does not have catastrophic consequences. If a single attestation committee is majority adversarial, the most an attacker can do is execute a local reorg.

#### `MaxEB` Increase and Stake Consolidation

The proposal to increase the `MaxEB` by a factor of $k$ (e.g., going from 32 ETH to 2048 ETH is a factor of $k=64$) raises the question of whether slot committees remain honestly controlled with high probability. The concern arises from the potential for stake consolidation, where many validators merge their stakes into a smaller set of validators with higher effective balances, which is precisely what the proposal aims to enable. Consolidation increases the variance of the weight distribution across committees, because the weight of consolidated validators is more concentrated. Essentially, the validator set becomes more "discrete".

In the extreme case, we could have all validators consolidate to the new `MaxEB`, so that the validator set size reduces by a factor of $k$, weakening the effect of the Law of Large Numbers on the distribution of weight across committees. In analyzing the security of committees after such an increase, we cannot predict the final weight distribution, and thus we cannot give exact estimates for the failure probability. On the other hand, we (informally) argue here that the worst-case scenario is full consolidation, where all validators end up with effective balance `EB = MaxEB`. It follows that the failure probabilities of this particular configuration, which effectively shrinks the validator set size by $k$, are an upper bound on the true failure probabilities.

#### Why Full Consolidation is the Worst-Case Scenario

We think of committee safety as a game between two parties, one controlling the adversarial validators and one the honest validators, and consider their respective strategies. The adversary benefits from consolidating its validators, as this increases the probability that at least *some* committee is majority dishonest. In other words, increasing the variance of the adversarial validator balances, and thus of the distribution of adversarial weight over committees, makes it more likely that there is a *positive deviation from the expected adversarial weight in some committee*. Of course it also makes negative deviations more likely, but the adversary does not care about that, as it only seeks to increase its chances of controlling at least one committee. Conversely, honest validators benefit from spreading their weight evenly across committees, as this minimizes the risk of negative deviations from their expected weight, which make the adversary's job easier. Honest-stake consolidation maximizes the risk of such deviations. Therefore, the worst-case scenario occurs when all validators consolidate their stakes: the adversary is then playing its best strategy, while the honest validators are playing their worst. To further corroborate this argument, [here](https://hackmd.io/@0g8QuqEeQBe45CC8toURGw/HylpAVzIH2) is a more rigorous analysis which, under the assumption that honest stake and adversarial stake are each evenly distributed, shows that a Chernoff-Hoeffding upper bound for the failure probability is maximised when all validators have the maximum balance `MaxEB`.

#### Safety Margins

Given full consolidation as the worst case, it is easy to analyze the failure probability by repeating the usual probabilistic analysis for a smaller validator set. We do so below, using a binomial approximation, and find that the safety margins remain high, i.e. the probability of an adversarial takeover of a committee is still quite low. If desired, a minimum required safety probability may be selected, leading to a `MaxEB` value which satisfies this constraint even in the worst case.

<!-- ![](https://storage.googleapis.com/ethereum-hackmd/upload_5257db6cc0f2ac51fdcb4c285cc1db2a.png) -->
<!-- We can compare this to the failure probability with validator set size $2^{19}$, roughly the same as today. -->
<!-- ![](https://storage.googleapis.com/ethereum-hackmd/upload_70f8d6314cc72e184ec63e188b7b0829.png) -->
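
For concreteness, here is a minimal sketch (not part of the original analysis) of how such failure probabilities can be estimated, assuming a fully consolidated validator set partitioned uniformly into the 32 slot committees of an epoch and approximating the committee composition with a binomial distribution. The exact numbers behind the figure below may differ slightly depending on the sampling model and rounding.

```python
# Rough estimate of the probability that a single slot committee is majority-adversarial,
# and of how often that is expected to happen, under the binomial approximation.
from scipy.stats import binom

SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12
SLOTS_PER_YEAR = 365.25 * 24 * 3600 / SECONDS_PER_SLOT

def committee_failure_probability(num_validators: int, honest_fraction: float) -> float:
    committee_size = num_validators // SLOTS_PER_EPOCH
    # The adversary controls the committee if strictly more than half of its members are adversarial
    return binom.sf(committee_size // 2, committee_size, 1 - honest_fraction)

for reduction in (1, 2, 4, 8):  # 1x = today's ~575k validators, 8x = heavy consolidation
    p = committee_failure_probability(575_000 // reduction, honest_fraction=0.55)
    print(f"{reduction}x reduction: per-committee failure probability {p:.2e}, "
          f"expected adversarial committees per year ~{p * SLOTS_PER_YEAR:.1f}")
```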

![](https://storage.googleapis.com/ethereum-hackmd/upload_db82d00aa3e609025da696f0926cccc7.png)

> **Figure 1** – Probability that a committee is controlled (>50% of the weight) by malicious validators, as a function of the honest validator proportion.

Starting with today's validator set size of 575,000, we also plot the `2x`, `4x`, and `8x` reduction cases. We see that even in the `8x` reduction case, as long as there are 55% honest validators (well below the 66.6% supermajority needed for finality), an attacker with 45% of the stake only controls a single committee slightly more often than once per year.

### Subcommittees

In the original Ethereum sharding plan, subcommittees were intended to be secure, i.e. to have an extremely high probability that 2/3 of their weight is controlled by honest validators. However, this is no longer necessary with sharding off the table. We now only need to worry about the role of subcommittees in attestation aggregation, i.e., about their correspondence with attestation subnets. Here, we have one main security consideration: we require that in every subnet that contains honest validators there exists at least one honest *aggregator*, as this ensures that the honest attestations from that subnet make it to the global gossip. Once the attestations are gossiped, they influence the fork-choice rule, are included in blocks, earn attestation rewards, and lead to justification of epochs.

Currently, aggregators are elected through a VRF lottery, whose parameters are set up so that the expected number of aggregators is `TARGET_AGGREGATORS_PER_COMMITTEE = 16`. In a post-consolidation world, it is theoretically possible that subcommittees could still be quite large while having barely any honest validators, if honest validators consolidate but adversarial ones do not. This would clearly make the current VRF lottery insecure, because it could well be the case that none of the few honest validators are elected.

One alternative is to treat a validator with effective balance $w$ as controlling $\lfloor \frac{w}{32} \rfloor$ virtual validators, so that it gets chosen as an aggregator if at least one of its virtual validators is chosen. In practice, this means that such a validator is chosen with probability $1-(1-q)^{\lfloor w/32 \rfloor}$, where $q = 16 \cdot \frac{32}{W}$ and $W$ is the total weight of the committee. Here $q$ is the probability with which we would choose a validator to be an aggregator in a world in which all validators have balance 32, i.e. $\frac{16}{n}$, where $n$ is the subcommittee size, so as to target 16 aggregators per subcommittee. The expression above then incorporates the probability $(1-q)^{\lfloor w/32 \rfloor}$ that none of the $\lfloor w/32 \rfloor$ virtual validators is chosen.

By doing the election in this way, we have two good properties:

- The total number of aggregators does not increase, in the sense that the probability of having $m$ aggregators elected with this scheme is bounded by the probability of having $m$ elected today, for an equal amount of weight in the subcommittee. In other words, consolidation of the weight in a subcommittee does not negatively affect the distribution of the total number of aggregators. This is simply because a consolidated validator, representing multiple virtual validators, still just counts as one aggregator. This is a desirable property because aggregates are what goes into the global gossip, so their number ultimately determines the gossip load for attestations.
- The probability of having at least one honest aggregator is no worse, because the number of honest virtual validators in a post-consolidation world is (up to rounding error from the floor function) the same as the number of actual honest validators in a pre-consolidation world, and each virtual validator has the same or better chance of being represented by an aggregator, compared to its actual counterpart. For example, in the scenario described above with a few large honest validators, each of them would have probability nearly 1 of being an aggregator, because each represents 64 virtual validators and $(1-q)^{64} \approx 0$.

We now show the necessary change in the spec. Note that the function now also needs the index of the validator being checked, in order to read its effective balance.

```diff=
- def is_aggregator(state: BeaconState, slot: Slot, index: CommitteeIndex, slot_signature: BLSSignature) -> bool:
+ def is_aggregator(state: BeaconState, slot: Slot, index: CommitteeIndex, validator_index: ValidatorIndex, slot_signature: BLSSignature) -> bool:
    committee = get_beacon_committee(state, slot, index)
+   # Each MIN_ACTIVATION_BALANCE increment of the validator's effective balance counts as one virtual validator
+   validator = state.validators[validator_index]
+   number_virtual_validators = validator.effective_balance // MIN_ACTIVATION_BALANCE
+   committee_balance = get_total_balance(state, set(committee))
+   # Selection probability is 1 - (1 - q)^number_virtual_validators, with q = 16 * 32 ETH / committee_balance
+   denominator = committee_balance ** number_virtual_validators
+   numerator = denominator - (committee_balance - TARGET_AGGREGATORS_PER_COMMITTEE * MIN_ACTIVATION_BALANCE) ** number_virtual_validators
+   modulo = denominator // numerator
-   modulo = max(1, len(committee) // TARGET_AGGREGATORS_PER_COMMITTEE)
    return bytes_to_uint64(hash(slot_signature)[0:8]) % modulo == 0
```

### Sync committee

The annotated Altair spec describes the [`get_next_sync_committee_indices` function](https://github.com/ethereum/annotated-spec/blob/master/altair/beacon-chain.md#get_sync_committee_indices) in this way:

> This function works as follows:
>
> - Compute the active validator indices at the next epoch.
> - Walk through the shuffled indices based on the seed at the next epoch (that is, go through `compute_shuffled_index(0)`, `compute_shuffled_index(1)`, etc.). For each index, accept that validator with probability `B/32` where `B` is their effective balance.
>
> Note that the probability of being accepted is proportional to your balance. Because of this, there is no need for participation rewards to be proportional to balance, and there is also no need for the light client fork choice rule to care about the balances of sync committee members.

Since the sampling is already weighted by effective balance, the `MaxEB` increase does not require modifications to the sync protocol. The sync committee selection will now simply have an acceptance probability of `B/MaxEB`, rather than `B/32`, without any spec changes:

```python
if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
    sync_committee_indices.append(candidate_index)
```

[Note that this has the consequence of making acceptance less likely for validators with lower balance, lengthening the computation of sync committee indices somewhat.]

Since we maintain weight-based selection probability, no changes are needed in the verification either. [`process_light_client_update`](https://github.com/ethereum/annotated-spec/blob/master/altair/sync-protocol.md#process_light_client_update) can still just check that ``sum(update.sync_committee_bits) * 3 >= len(update.sync_committee_bits) * 2``, i.e. that 2/3 of the sync committee validators have signed the update, without needing to know about their weights.
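
To illustrate the bracketed remark above, here is a minimal simulation (illustrative only, not spec code) of the rejection-sampling step: acceptance probability stays proportional to the effective balance, and the expected number of candidates examined per seat grows as `MAX_EFFECTIVE_BALANCE` divided by the average effective balance. The `random_byte` here is a uniform stand-in for the hash-derived byte used in the spec.

```python
import random

MAX_EFFECTIVE_BALANCE = 2048 * 10**9  # post-increase MaxEB, in Gwei
MAX_RANDOM_BYTE = 2**8 - 1

def accepted(effective_balance: int, rng: random.Random) -> bool:
    # Same comparison as in get_next_sync_committee_indices, with a uniform random byte
    random_byte = rng.randrange(256)
    return effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte

rng = random.Random(0)
for balance_eth in (32, 256, 2048):
    balance = balance_eth * 10**9
    trials = 100_000
    acceptance = sum(accepted(balance, rng) for _ in range(trials)) / trials
    print(f"{balance_eth:>4} ETH: acceptance ~{acceptance:.3f} "
          f"(expected ~{balance / MAX_EFFECTIVE_BALANCE:.3f}), "
          f"~{MAX_EFFECTIVE_BALANCE / balance:.0f} candidates examined per accepted seat")
```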

## Churn invariants

### Rate limits today

Changes to the active validator set, either via new activations or via exits, are carefully [rate limited](https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#misc), in order to avoid too quick a degradation of the quorum-intersection properties which economic finality depends on. In other words, economic finality guarantees that $(1/3-\epsilon)$ of the stake is slashable for two conflicting finalized checkpoints whose epochs differ by $N$, as long as our validity rules guarantee that the total active stake cannot change by more than $\epsilon$ during $N$ epochs. We then carefully choose the rate-limiting parameters so as to ensure that there cannot be a large inflow or outflow of *weight* (we do not care about the number of validators here, only weight!) in a short amount of time. These parameters define the concrete economic security guarantees that we have, and what the [weak subjectivity](/mvCzUnswQg6BP0PVcRLELA) period is.

Concretely, the current parameters regulating the *churn limit*, i.e. the maximum rate of activations and exits per epoch, are `MIN_PER_EPOCH_CHURN_LIMIT = 4` and `CHURN_LIMIT_QUOTIENT = 65536` ($2^{16}$). They both contribute to the [`get_validator_churn_limit`](https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#get_validator_churn_limit) function, which we show below. The first one simply says that there can always be at least 4 activations and exits per epoch. More important is the second one, which regulates the maximum churn allowed in one epoch, by setting it to $1/65536$ of the validator set (whenever this is at least 4), or approximately 0.0015%.

```python=
def get_validator_churn_limit(state: BeaconState) -> uint64:
    """
    Return the validator churn limit for the current epoch.
    """
    active_validator_indices = get_active_validator_indices(state, get_current_epoch(state))
    return max(MIN_PER_EPOCH_CHURN_LIMIT, uint64(len(active_validator_indices)) // CHURN_LIMIT_QUOTIENT)
```

### Changes

In order to keep the same security guarantees on this front after increasing the `MaxEB`, we need to maintain the property that at most $1/65536$ of the *active weight* can enter or exit the validator set in an epoch. This is not achieved by naively applying the existing rate limits, because these target the inflow and outflow of *validators, rather than weight*, which is fine in the current protocol because of the rough equivalence between the two concepts. Once balances are highly variable, this rough equivalence breaks down and we need to directly rate limit the flow of weight, or otherwise suffer a huge security reduction. For example, $1/65536$ of the validators currently means about 8 validators, and 8 validators with balance 2048 ETH constitute almost $1/1000$ of the total stake, so that the 64x increase in `MaxEB` allows for a flow of weight which is 64x faster than desired. This would in practice mean that 330 epochs, roughly 35 hours, are all that is required for 33% of the total weight to exit.
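
As a quick sanity check of this arithmetic, here is a small back-of-the-envelope sketch (not from the document), assuming today's ~575,000 validators at 32 ETH each and worst-case exits by fully consolidated 2048 ETH validators:

```python
# Back-of-the-envelope check of the weight-churn example above.
CHURN_LIMIT_QUOTIENT = 2**16
MINUTES_PER_EPOCH = 32 * 12 / 60  # 6.4 minutes

num_validators = 575_000
total_stake_eth = num_validators * 32                          # ~18.4M ETH
validators_per_epoch = num_validators // CHURN_LIMIT_QUOTIENT  # ~8 validators
weight_per_epoch_eth = validators_per_epoch * 2048             # ~16,384 ETH

print(f"per-epoch weight churn ~1/{total_stake_eth / weight_per_epoch_eth:.0f} of the stake")
# At roughly 1/1000 of the stake per epoch, 33% of the weight can exit in ~330 epochs:
print(f"330 epochs ~ {330 * MINUTES_PER_EPOCH / 60:.0f} hours")
```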
""" - active_validator_indices = get_active_validator_indices(state, get_current_epoch(state)) - return max(MIN_PER_EPOCH_CHURN_LIMIT, uint64(len(active_validator_indices)) // CHURN_LIMIT_QUOTIENT) + return max(MIN_PER_EPOCH_CHURN_LIMIT * MIN_ACTIVATION_BALANCE, get_total_active_balance(state) // CHURN_LIMIT_QUOTIENT) ``` We also modify activations, exits to properly implement weight-based rate limiting. For this, we need two additional field in the `BeaconState`, which help us properly account for situations in which the balance of an activating or exiting validator is larger than the remanining churn: ```diff= class BeaconState(Container): ... # Registry validators: List[Validator, VALIDATOR_REGISTRY_LIMIT] balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT] + activation_validator_balance: Gwei + exit_queue_churn: Gwei ... ``` #### Activations Activations are processed during an epoch transition by the [`process_registry_updates`](https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#registry-updates) function. We mantain the current way of forming the activation queue, and simply modify the way we process it to be weight-sensitive. `activation_validator_balance` keeps track of the yet-to-be-processed balance of the validator at the front of the activation queue, so that even large validators can be processed over multiple epochs. ```diff= def process_registry_updates(state: BeaconState) -> None: # Process activation eligibility and ejections for index, validator in enumerate(state.validators): if is_eligible_for_activation_queue(validator): validator.activation_eligibility_epoch = get_current_epoch(state) + 1 if is_active_validator(validator, get_current_epoch(state)) and validator.effective_balance <= EJECTION_BALANCE: initiate_validator_exit(state, ValidatorIndex(index)) # Queue validators eligible for activation and not yet dequeued for activation activation_queue = sorted([ index for index, validator in enumerate(state.validators) if is_eligible_for_activation(state, validator) # Order by the sequence of activation_eligibility_epoch setting and then index ], key=lambda index: (state.validators[index].activation_eligibility_epoch, index)) # Dequeue validators for activation up to churn limit - for index in activation_queue[:get_validator_churn_limit(state)]: - validator = state.validators[index] - validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state)) + activation_balance_to_consume = get_validator_churn_limit(state) + for index in activation_queue: + validator = state.validators[index] + # Validator can now be activated + if state.activation_validator_balance + activation_balance_to_consume >= validator.effective_balance: + activation_balance_to_consume -= (validator.effective_balance - state.activation_validator_balance) + state.activation_validator_balance = Gwei(0) + validator.activation_epoch = compute_activation_exit_epoch(get_current_epoch(state)) + else: + state.activation_validator_balance += activation_balance_to_consume + break ``` #### Exits We modify the [`initiate_validator_exit`](https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#initiate_validator_exit) function to be weight-sensitive. `exit_queue_churn` accumulates the effective balances of validators whose `exit_epoch` is the latest one. When processing an exit, we check whether `validator.effective_balance` fits within the current `exit_queue_epoch`, i.e. 
whether `state.exit_queue_churn + validator.effective_balance <= per_epoch_churn_limit`. If so, we simply increment `state.exit_queue_churn` by `validator.effective_balance` and assign the current `exit_queue_epoch` to the validator. If not, the validator occupies at least this epoch, and possibly more if its effective balance is greater than `per_epoch_churn_limit`. In this case, we keep increasing the `exit_queue_epoch` and decreasing `exit_balance_to_consume` by `per_epoch_churn_limit` until `exit_balance_to_consume < per_epoch_churn_limit`, i.e., until the remaining balance to process fits in one epoch. This is then the assigned exit epoch, while `state.exit_queue_churn` is set to the remaining balance, reflecting the leftover churn caused by the validator in its exit epoch.

For example, say `per_epoch_churn_limit` is 128 ETH, `state.exit_queue_churn` is currently 64 ETH, and we process the exit of a validator with effective balance 256 ETH. The validator's balance fills up the current `exit_queue_epoch`, say epoch N, and the following one as well. Therefore, the validator's `exit_epoch` is set to N+2, and `state.exit_queue_churn` to 64 ETH, which is what is left over after filling up the remainder of epoch N (64 ETH) and the whole of epoch N+1 (128 ETH).

```diff=
def initiate_validator_exit(state: BeaconState, index: ValidatorIndex) -> None:
    ...
    # Compute exit queue epoch
    exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
    exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
-   exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
-   if exit_queue_churn >= get_validator_churn_limit(state):
-       exit_queue_epoch += Epoch(1)
+   exit_balance_to_consume = validator.effective_balance
+   per_epoch_churn_limit = get_validator_churn_limit(state)
+   if state.exit_queue_churn + exit_balance_to_consume <= per_epoch_churn_limit:
+       state.exit_queue_churn += exit_balance_to_consume
+   else:  # Exit balance rolls over to subsequent epoch(s)
+       exit_balance_to_consume -= (per_epoch_churn_limit - state.exit_queue_churn)
+       exit_queue_epoch += Epoch(1)
+       while exit_balance_to_consume >= per_epoch_churn_limit:
+           exit_balance_to_consume -= per_epoch_churn_limit
+           exit_queue_epoch += Epoch(1)
+       state.exit_queue_churn = exit_balance_to_consume
    # Set validator exit epoch and withdrawable epoch
    validator.exit_epoch = exit_queue_epoch
    validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
```
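
To double-check the logic, here is a minimal standalone sketch (plain Python rather than spec code) that reproduces the worked example above:

```python
# Exit-queue assignment, mirroring the proposed initiate_validator_exit logic (amounts in ETH).
def compute_exit_queue_position(exit_queue_epoch: int, exit_queue_churn: int,
                                per_epoch_churn_limit: int, effective_balance: int) -> tuple[int, int]:
    exit_balance_to_consume = effective_balance
    if exit_queue_churn + exit_balance_to_consume <= per_epoch_churn_limit:
        exit_queue_churn += exit_balance_to_consume
    else:
        # Fill what is left of the current epoch, then whole epochs, until the rest fits
        exit_balance_to_consume -= per_epoch_churn_limit - exit_queue_churn
        exit_queue_epoch += 1
        while exit_balance_to_consume >= per_epoch_churn_limit:
            exit_balance_to_consume -= per_epoch_churn_limit
            exit_queue_epoch += 1
        exit_queue_churn = exit_balance_to_consume
    return exit_queue_epoch, exit_queue_churn

N = 1000  # arbitrary starting exit_queue_epoch
assert compute_exit_queue_position(N, 64, 128, 256) == (N + 2, 64)  # exits at N+2, 64 ETH of churn used there
```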

#### Balance top-ups

The [annotated phase0 spec](https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#deposits) comments on the interaction of top-ups with the activation queue:

> Note: yes, balance top-ups do kinda get around activation queues, but note that for an attacker to benefit from this, they need to have already lost the ETH that is being topped up [since depositing requires 32 ETH and 32 ETH is the maximum effective balance], so it is not an attack vector

With "balance top-ups do kinda get around activation queues", the document refers to the fact that top-ups are capped at `MAX_DEPOSITS = 16` *per block*, much higher than the rate limit on activations. Still, this is safe because the minimum activation balance and the maximum effective balance coincide, so that you cannot use a top-up to increase your balance beyond what you have already lost. For example, this preserves the property that a double finalization has a cost of attack equivalent to slashing 1/3 of the stake, because any ETH introduced into the active stake through top-ups and used in the attack would have already been lost in the past, so it does not matter whether or not it can be slashed.

If we increase `MaxEB` but keep the minimum activation balance fixed at 32 ETH, we break this property, making top-ups an effective way to get around the activation queue. To prevent this, the simplest workaround is to only allow top-ups up to a balance of 32 ETH. This preserves some of the functionality, in particular for validators which do not make use of the `MaxEB` increase and still keep their effective balance around 32 ETH. On the other hand, it does not allow validators with a higher effective balance to replenish it, were they to lose some of it, nor does it allow topping up to increase one's effective balance. Validators which wish to do so would have to withdraw and activate a new validator.

```diff=
def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
    validator_pubkeys = [v.pubkey for v in state.validators]
    if pubkey not in validator_pubkeys:
        ...
    else:
-       # Increase balance by deposit amount
+       # Increase balance by deposit amount, up to MIN_ACTIVATION_BALANCE (no-op if the balance already exceeds it)
        index = ValidatorIndex(validator_pubkeys.index(pubkey))
-       increase_balance(state, index, amount)
+       increase_balance(state, index, min(amount, MIN_ACTIVATION_BALANCE - min(state.balances[index], MIN_ACTIVATION_BALANCE)))
```

## Withdrawals

In the current spec, there are both partial and full withdrawals. Partial withdrawals happen automatically and remove any balance in excess of 32 ETH. Full withdrawals happen when a validator is exiting the protocol altogether. We propose adding a new withdrawal credential type with the prefix `0x02`:

```diff=
  BLS_WITHDRAWAL_PREFIX = Bytes1('0x00')
  ETH1_ADDRESS_WITHDRAWAL_PREFIX = Bytes1('0x01')
+ COMPOUNDING_WITHDRAWAL_PREFIX = Bytes1('0x02')
```

If validators set this prefix on their withdrawal credential, the automatic partial withdrawal only kicks in when their balance exceeds the `MaxEB`. Validators with the `ETH1_ADDRESS_WITHDRAWAL_PREFIX` still have the same behavior of having any balance above 32 ETH automatically withdrawn. The modified `is_partially_withdrawable_validator` is

```diff=
def is_partially_withdrawable_validator(validator: Validator, balance: Gwei) -> bool:
    """
    Check if ``validator`` is partially withdrawable.
    """
-   has_max_effective_balance = validator.effective_balance == MAX_EFFECTIVE_BALANCE
-   has_excess_balance = balance > MAX_EFFECTIVE_BALANCE
-   return has_eth1_withdrawal_credential(validator) and has_max_effective_balance and has_excess_balance
+   return get_validator_excess_balance(validator, balance) > 0
```

This depends on a new function called `get_validator_excess_balance`, which is defined below.

```diff=
+ def get_validator_excess_balance(validator: Validator, balance: Gwei) -> Gwei:
+     """
+     Get excess balance for partial withdrawals for ``validator``.
+     """
+     if has_compounding_withdrawal_credential(validator) and balance > MAX_EFFECTIVE_BALANCE:
+         return balance - MAX_EFFECTIVE_BALANCE
+     elif has_eth1_withdrawal_credential(validator) and balance > MIN_ACTIVATION_BALANCE:
+         return balance - MIN_ACTIVATION_BALANCE
+     return Gwei(0)
```
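
The helper `has_compounding_withdrawal_credential` is not defined in this document; presumably it just checks for the `0x02` prefix, mirroring `has_eth1_withdrawal_credential`. The following minimal sketch (plain Python with hypothetical constants, not spec code) illustrates the resulting behavior of `get_validator_excess_balance` for the two credential types:

```python
# Behavior of the excess-balance rule for 0x01 (eth1) vs 0x02 (compounding) credentials.
MIN_ACTIVATION_BALANCE = 32 * 10**9    # Gwei
MAX_EFFECTIVE_BALANCE = 2048 * 10**9   # Gwei, the increased MaxEB assumed by this proposal

def excess_balance(withdrawal_prefix: bytes, balance: int) -> int:
    if withdrawal_prefix == b'\x02' and balance > MAX_EFFECTIVE_BALANCE:
        return balance - MAX_EFFECTIVE_BALANCE   # compounding: only skim above MaxEB
    if withdrawal_prefix == b'\x01' and balance > MIN_ACTIVATION_BALANCE:
        return balance - MIN_ACTIVATION_BALANCE  # eth1 credential: current behavior
    return 0

assert excess_balance(b'\x01', 33 * 10**9) == 1 * 10**9    # 0x01: skimmed down to 32 ETH
assert excess_balance(b'\x02', 33 * 10**9) == 0            # 0x02: rewards keep compounding
assert excess_balance(b'\x02', 2049 * 10**9) == 1 * 10**9  # 0x02: skimmed only above MaxEB
```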

<!--
We propose changing the semantics of a *partial withdrawal* to allow validators to set different *ceilings* for their effective balance using their withdrawal credential. We introduce a new withdrawal prefix of `COMPOUNDING_WITHDRAWAL_PREFIX=0x02` that a validator can upgrade to. This is the exact same process that validators used to upgrade their BLS to Eth1 credential in [`process_bls_to_execution_change`](https://github.com/ethereum/consensus-specs/blob/dev/specs/capella/beacon-chain.md#new-process_bls_to_execution_change). The validators will call a function called `process_execution_to_compounding_change`, which will update their withdrawal credential to

```python=
validator.withdrawal_credentials = (
    COMPOUNDING_WITHDRAWAL_PREFIX  # byte 0
    + ceiling_bytes                # bytes [1,2]
    + b'\x00' * 9                  # bytes [3,11]
    + address                      # bytes [12,31]
)
```

We pack the `ceiling_bytes` (a 2-byte word cast from a uint16) into the bytes preceding the actual withdrawal address. With this in place, we introduce a new function called `get_balance_ceiling`:

```python=
def get_balance_ceiling(validator: Validator) -> Gwei:
    """
    Return the balance ceiling for a validator.
    """
    if not has_compounding_withdrawal_credential(validator):
        return MIN_ACTIVATION_BALANCE
    # With a compounding credential, bytes [1,2] are the ceiling in ETH.
    return bytes_to_uint16(validator.withdrawal_credentials[1:3]) * EFFECTIVE_BALANCE_INCREMENT
```

This function returns the balance ceiling set for the validator. If they are not using a compounding credential, then we return the 32 ETH cap that was there previously, so nothing changes if the validator takes no action. If they choose to upgrade their credential, they can set the ceiling to a power-of-2 `ETH` value greater than 32. The consolidation path for stakers is:

1. choose a validator to consolidate to,
2. change that validator's withdrawal credential to encode a ceiling value in `{64, 128, 256, 512, 1024, 2048}`,
3. add balance to that validator.

With this all in place, the new `is_partially_withdrawable_validator` function can be modified slightly to

```python=
def is_partially_withdrawable_validator(validator: Validator, balance: Gwei) -> bool:
    """
    Check if ``validator`` is partially withdrawable.
    """
    if not has_withdrawable_credential(validator):
        return False
    ceiling = get_balance_ceiling(validator)
    has_ceiling_effective_balance = validator.effective_balance == ceiling
    has_excess_balance = balance > ceiling
    return has_ceiling_effective_balance and has_excess_balance
```

As a result, the `get_expected_withdrawals` function can be modified to only withdraw based on the new ceiling.

```python=
def get_expected_withdrawals(state: BeaconState) -> Sequence[Withdrawal]:
    ...
    for _ in range(bound):
        if is_fully_withdrawable_validator(validator, balance, epoch):
            ...
        elif is_partially_withdrawable_validator(validator, balance):
            ceiling = get_balance_ceiling(validator)  # modified
            withdrawals.append(Withdrawal(
                index=withdrawal_index,
                validator_index=validator_index,
                address=ExecutionAddress(validator.withdrawal_credentials[12:]),
                amount=balance - ceiling,  # modified
            ))
            withdrawal_index += WithdrawalIndex(1)
    ...
```
-->

<!--
## Slashing

We add the constant `MIN_PROPOSER_SLASHING = Gwei(10**9)`, or 1 ETH, which is the minimum slashing amount applied to instances of proposer slashing. Other slashing instances are still punished proportionally to effective balance.

TO DO: this can be used to avoid a heavier attestation slashing, by getting slashed for proposal equivocation first, because `is_slashable_validator` is called before `slash_validator`, and a second slashing isn't processed if `validator.slashed` is already `True`.
We would need to replace `validator.slashed` with a 2-bit flag...

```python=
def slash_validator(state: BeaconState, slashed_index: ValidatorIndex, whistleblower_index: ValidatorIndex=None, is_proposer_slashing: bool=False) -> None:
    """
    Slash the validator with index ``slashed_index``.
    """
    epoch = get_current_epoch(state)
    initiate_validator_exit(state, slashed_index)
    validator = state.validators[slashed_index]
    validator.slashed = True
    validator.withdrawable_epoch = max(validator.withdrawable_epoch, Epoch(epoch + EPOCHS_PER_SLASHINGS_VECTOR))
    state.slashings[epoch % EPOCHS_PER_SLASHINGS_VECTOR] += validator.effective_balance
    min_slashing_amount = MIN_PROPOSER_SLASHING if is_proposer_slashing else validator.effective_balance // MIN_SLASHING_PENALTY_QUOTIENT
    decrease_balance(state, slashed_index, min_slashing_amount)

    # Apply proposer and whistleblower rewards
    proposer_index = get_beacon_proposer_index(state)
    if whistleblower_index is None:
        whistleblower_index = proposer_index
    whistleblower_reward = Gwei(validator.effective_balance // WHISTLEBLOWER_REWARD_QUOTIENT)
    proposer_reward = Gwei(whistleblower_reward // PROPOSER_REWARD_QUOTIENT)
    increase_balance(state, proposer_index, proposer_reward)
    increase_balance(state, whistleblower_index, Gwei(whistleblower_reward - proposer_reward))
```

```diff=
- def slash_validator(state: BeaconState, slashed_index: ValidatorIndex, whistleblower_index: ValidatorIndex=None) -> None:
+ def slash_validator(state: BeaconState, slashed_index: ValidatorIndex, whistleblower_index: ValidatorIndex=None, is_proposer_slashing: bool=False) -> None:
    """
    Slash the validator with index ``slashed_index``.
    """
    epoch = get_current_epoch(state)
    initiate_validator_exit(state, slashed_index)
    validator = state.validators[slashed_index]
    validator.slashed = True
    validator.withdrawable_epoch = max(validator.withdrawable_epoch, Epoch(epoch + EPOCHS_PER_SLASHINGS_VECTOR))
    state.slashings[epoch % EPOCHS_PER_SLASHINGS_VECTOR] += validator.effective_balance
+   min_slashing_amount = MIN_PROPOSER_SLASHING if is_proposer_slashing else validator.effective_balance // MIN_SLASHING_PENALTY_QUOTIENT
+   decrease_balance(state, slashed_index, min_slashing_amount)
-   decrease_balance(state, slashed_index, validator.effective_balance // MIN_SLASHING_PENALTY_QUOTIENT)

    # Apply proposer and whistleblower rewards
    proposer_index = get_beacon_proposer_index(state)
    if whistleblower_index is None:
        whistleblower_index = proposer_index
    whistleblower_reward = Gwei(validator.effective_balance // WHISTLEBLOWER_REWARD_QUOTIENT)
    proposer_reward = Gwei(whistleblower_reward // PROPOSER_REWARD_QUOTIENT)
    increase_balance(state, proposer_index, proposer_reward)
    increase_balance(state, whistleblower_index, Gwei(whistleblower_reward - proposer_reward))
```

```diff=
def process_proposer_slashing(state: BeaconState, proposer_slashing: ProposerSlashing) -> None:
    header_1 = proposer_slashing.signed_header_1.message
    header_2 = proposer_slashing.signed_header_2.message
    # Verify header slots match
    assert header_1.slot == header_2.slot
    # Verify header proposer indices match
    assert header_1.proposer_index == header_2.proposer_index
    # Verify the headers are different
    assert header_1 != header_2
    # Verify the proposer is slashable
    proposer = state.validators[header_1.proposer_index]
    assert is_slashable_validator(proposer, get_current_epoch(state))
    # Verify signatures
    for signed_header in (proposer_slashing.signed_header_1, proposer_slashing.signed_header_2):
        domain = get_domain(state, DOMAIN_BEACON_PROPOSER, compute_epoch_at_slot(signed_header.message.slot))
        signing_root = compute_signing_root(signed_header.message, domain)
        assert bls.Verify(proposer.pubkey, signing_root, signed_header.signature)

-   slash_validator(state, header_1.proposer_index)
+   slash_validator(state, header_1.proposer_index, is_proposer_slashing=True)
```
-->