We have a couple of tasks that a validator is expected to accomplish:
We can consider them from two angles:
See also annotated spec and design rationale.
Assuming validators are rewarded additively (one weighted reward for each task accomplished), we know the feasible set for the coefficients associated with each task. For instance, we might expect that any reward r > 0 is sufficient to incentivise validators to vote correctly on the FFG data (source/target). For other tasks, such as the bit and chunk challenge games, the feasible set is more complex.
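To make the additive scheme concrete, here is a minimal sketch; the task names and coefficient values below are placeholders rather than spec constants, and the base reward is abstracted into a single number.

```python
from typing import Dict

# Hypothetical per-task reward coefficients; the task names and values are
# placeholders for illustration, not spec constants.
REWARD_COEFFICIENTS: Dict[str, float] = {
    "ffg_source": 1.0,       # correct source checkpoint vote
    "ffg_target": 1.0,       # correct target checkpoint vote
    "head": 1.0,             # correct head vote
    "inclusion_delay": 1.0,  # attestation included promptly
}

def additive_reward(performed: Dict[str, bool], base_reward: float) -> float:
    """Purely additive scheme: each task performed correctly earns its own
    weighted reward, independently of the other tasks."""
    return sum(
        coeff * base_reward
        for task, coeff in REWARD_COEFFICIENTS.items()
        if performed.get(task, False)
    )

# A validator that skips the head vote still collects 3 of the 4 reward units.
print(additive_reward(
    {"ffg_source": True, "ffg_target": True, "head": False, "inclusion_delay": True},
    base_reward=1.0,
))
```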
Yet we know that some couplings/interactions exist between these tasks:
Thus, when attributed linearly, the rewards for these tasks can fail to incentivise the intended behaviour. For instance, one might want to condition the inclusion delay reward on the correctness of the head vote, to rule out the deviation presented above. Additionally, a validator can still obtain a fairly good return while not performing all of its duties.
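A sketch of the conditioning mentioned above, reusing the same placeholder tasks with equal unit coefficients: the inclusion delay reward is only paid when the head vote is correct, so the scheme is no longer purely additive.

```python
def coupled_reward(performed: dict, base_reward: float) -> float:
    """Variant where the inclusion delay reward is conditioned on the
    correctness of the head vote (all coefficients set to 1 for brevity)."""
    reward = 0.0
    if performed.get("ffg_source", False):
        reward += base_reward
    if performed.get("ffg_target", False):
        reward += base_reward
    if performed.get("head", False):
        reward += base_reward
        # Inclusion delay is only rewarded alongside a correct head vote,
        # so skipping the head vote now also forfeits this component.
        if performed.get("inclusion_delay", False):
            reward += base_reward
    return reward

# The lazy strategy from the additive example now loses two reward units.
print(coupled_reward(
    {"ffg_source": True, "ffg_target": True, "head": False, "inclusion_delay": True},
    base_reward=1.0,
))  # 2.0 instead of 3.0
```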
More generally, as the number of tasks that a validator is expected to perform increases, we are looking for a principled approach to setting the rewards.
If we look for an optimal way to do so, we need to define what we are actually trying to optimise for. Ideally, we want the punishments to be gradual, with the order:
An optimisation approach would account for the levels of punishment required by each type of validator.
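One hedged way to make this concrete; the validator types, per-task costs, and coefficient grid below are entirely made up. The idea is to trace out the set of reward coefficients under which every type prefers performing all tasks to any lazy subset; penalties for missed tasks would enter the same comparison as negative terms.

```python
import itertools

# Illustrative validator types with different per-task operating costs.
TASKS = ["ffg_vote", "head_vote", "challenge_game"]
TYPES = {
    "diy_home_staker":   {"ffg_vote": 0.10, "head_vote": 0.20, "challenge_game": 0.80},
    "exchange_operator": {"ffg_vote": 0.05, "head_vote": 0.05, "challenge_game": 0.30},
}

def is_incentive_compatible(coeffs: dict) -> bool:
    """True if, for every validator type, performing all tasks yields a
    strictly higher net payoff than performing any strict subset of tasks."""
    for costs in TYPES.values():
        full_payoff = sum(coeffs[t] - costs[t] for t in TASKS)
        for k in range(len(TASKS)):
            for subset in itertools.combinations(TASKS, k):
                if sum(coeffs[t] - costs[t] for t in subset) >= full_payoff:
                    return False
    return True

# Coarse grid search over coefficient vectors to sketch the feasible set.
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
feasible = [
    dict(zip(TASKS, values))
    for values in itertools.product(grid, repeat=len(TASKS))
    if is_incentive_compatible(dict(zip(TASKS, values)))
]
print(f"{len(feasible)} of {len(grid) ** len(TASKS)} grid points are feasible")
```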
A recent report stressed the low yields for DIY, hardware-based validators (arguably the biggest contributors to overall decentralisation) when the ETH price is low. Meanwhile, validator infrastructures with larger economies of scale (e.g., exchanges running several thousand validators) are more robust to such macroeconomic factors. The recommendations of the report include adjusting the rewards via the BASE_REWARD_FACTOR parameter. The value of BASE_REWARD_FACTOR could also be tied to, e.g., the length of the exit queue, triggering an increase in reward size when many validators want to exit, or to the total amount of ETH staked.

We could attempt a model of the different validator populations using tools from evolutionary game theory. This could help highlight centralising forces as well as analyse the long-term distribution of validators among the various populations. It also opens the door to simulations.
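As a sketch of what such a model could look like (the populations, payoffs, and initial shares below are made up rather than calibrated), one could start from replicator dynamics, where each population's share grows in proportion to how its payoff compares to the population-average payoff.

```python
# Toy replicator dynamics over stylised validator populations; the payoffs
# (net yields) and initial shares are illustrative only.
POPULATIONS = ["diy_home", "staking_pool", "exchange"]
PAYOFFS = {"diy_home": 0.040, "staking_pool": 0.050, "exchange": 0.055}

def replicator_step(shares: dict, dt: float = 0.1) -> dict:
    """One Euler step of the replicator equation:
    d(share_i)/dt = share_i * (payoff_i - average_payoff)."""
    avg_payoff = sum(shares[p] * PAYOFFS[p] for p in POPULATIONS)
    new_shares = {
        p: shares[p] + dt * shares[p] * (PAYOFFS[p] - avg_payoff)
        for p in POPULATIONS
    }
    total = sum(new_shares.values())
    return {p: s / total for p, s in new_shares.items()}  # keep shares summing to 1

shares = {"diy_home": 0.3, "staking_pool": 0.4, "exchange": 0.3}
for _ in range(2000):
    shares = replicator_step(shares)
print(shares)  # the higher-yield populations come to dominate over time
```

Richer variants could make the payoffs depend on the reward coefficients and on the total stake, which is what would let such a simulation speak to the centralising forces discussed above.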