In this document we present the evolution of the proof-of-stake consensus protocol of Ethereum, called Gasper, aiming at a self-contained reference for future study and research.
::: spoiler ::: info We include in spoiler boxes of this kind the implementation of the most relevant theoretical parts. Observe that the code shown for the implementation follows the consensus pyspec [https://github.com/ethereum/consensus-specs], and is presented using Simple Serialize (SSZ), a serialization and merkleization standard created specifically for Ethereum consensus. This specification serves as a reference for consensus layer developers and is also used for creating the test case vectors for client teams. A tutorial on how to run the Ethereum consensus pyspec can be found here, and an updated annotated specification can be found here. :::
Since this document only aims at grouping together the most relevant resources concerning the proof-of-stake consensus protocol of Ethereum, some parts are reproduced verbatim from the original sources. In such cases, the text is set in italic type and the original reference is given.
Special thanks to Ittai Abraham, Aditya Asgaonkar, Christian Cachin, Francesco D'Amato, Tyler Holmes, Joachim Neu, and anonymous reviewers for feedback and helpful discussions.
The proof-of-stake consensus protocol of Ethereum has evolved over the years: attacks were discovered that undermined its properties and functioning, and solutions were developed in response.
In order to better understand the currently implemented protocol, we start by presenting its original version, by Buterin et al., highlighting its underlying components and properties. Then, we present the most relevant attacks and problems that have been discovered so far, and discuss solutions to them.
First, let us introduce the system model that will be used from now on.
We consider a system of
Validator
class Validator(Container):
pubkey: BLSPubkey
withdrawal_credentials: Bytes32 # Commitment to pubkey for withdrawals
effective_balance: Gwei # Balance at stake
slashed: boolean
# Status epochs
activation_eligibility_epoch: Epoch # When criteria for activation were met
activation_epoch: Epoch
exit_epoch: Epoch
withdrawable_epoch: Epoch # When validator can withdraw funds
This object contains all the information relating to a particular validator
pubkey
: public key for signing;
withdrawal_credentials
: a hash of the pubkey that will be used to withdraw;
effective_balance
: the balance of the validator that is used for all calculations;
slashed
: indicates whether the validator has been slashed (this term will be introduced a few sections below);
activation_eligibility_epoch
: when the validator became eligible for activation, i.e., when it became eligible to become an active validator;
activation_epoch
: when the validator was activated;
exit_epoch
: when the validator exited from the validator set;
withdrawable_epoch
: when the validator became eligible to withdraw their balance;
The precise epochs are kept around in order to calculate the current active validator set and the historical active validator set.
Observe that validators have two private/public key pairs: pubkey
, used for signing protocol messages, and a separate withdrawal key. withdrawal_credentials
is derived from the validator's withdrawal key and allows the validator to prove ownership of the funds in order to withdraw them.
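One common form of withdrawal credentials in phase 0 is the BLS form: a 0x00 prefix byte followed by the last 31 bytes of the hash of the withdrawal public key. A minimal sketch of this derivation (sha256 is the spec's hash function; the 48-byte key below is a placeholder, not a real BLS key):

```python
import hashlib

BLS_WITHDRAWAL_PREFIX = bytes([0x00])

def bls_withdrawal_credentials(withdrawal_pubkey: bytes) -> bytes:
    """Prefix byte 0x00 followed by the last 31 bytes of hash(withdrawal_pubkey)."""
    return BLS_WITHDRAWAL_PREFIX + hashlib.sha256(withdrawal_pubkey).digest()[1:]

# A 48-byte placeholder standing in for a real BLS withdrawal public key.
dummy_pubkey = b"\x11" * 48
creds = bls_withdrawal_credentials(dummy_pubkey)
assert len(creds) == 32 and creds[0] == 0x00
```

Because only the hash of the withdrawal key is committed on-chain, the withdrawal key itself can stay offline until funds are actually withdrawn.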
Validators are assigned a protocol to follow.
A protocol for
Each validator has a deposit (or stake); when a validator joins
Deposit
class DepositMessage(Container):
pubkey: BLSPubkey
withdrawal_credentials: Bytes32
amount: Gwei
class DepositData(Container):
pubkey: BLSPubkey
withdrawal_credentials: Bytes32
amount: Gwei
signature: BLSSignature # Signing over DepositMessage
Observe that, as a rule, there can be only one validator per pubkey; if pubkey is already present in the list of validators, then amount is added to that validator's balance.
Requiring the withdrawal_credentials
and the amount
to be signed under the validator's key (verifiable against pubkey) prevents certain attack vectors. This is why we have DepositData(Container)
, i.e., a signed DepositMessage(Container)
.
A proof that a validator deposited is given by the following object.
class Deposit(Container):
proof: Vector[Bytes32, DEPOSIT_CONTRACT_TREE_DEPTH + 1] # Merkle path to deposit root
data: DepositData
Every validator has an effective_balance
(of at most 32 ETH), and its voting power is weighted by it. Any balance above this is ignored.
However, validator balances are stored, for efficiency reasons, in two places: (i) in the effective_balance
in the Validator
record and (ii) in the (exact) balances
in a separate record (in BeaconState
, see a few sections below).
The exact balances change every epoch (due to rewards and penalties), so we store them in a compact array that requires rehashing only <32 MB to update, while the effective balances (which are used for all other computations that require validator balances) are updated using a hysteresis formula: if the effective balance is $n$ ETH, and if the exact balance goes below $n-0.25$ ETH, then the effective balance is set to $n-1$ ETH, and if the exact balance goes above $n+1.25$ ETH the effective balance is set to $n+1$ ETH.
Since the exact balance must change by at least a full $0.5$ ETH to trigger an effective balance update, this ensures an attacker can't make effective balances update every epoch -- and thus cause processing the chain to become very slow -- by repeatedly nudging the exact balances above, and then below, some threshold. [https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md]
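A sketch of this hysteresis rule for a single validator, using the mainnet constants (EFFECTIVE_BALANCE_INCREMENT of 1 ETH, HYSTERESIS_QUOTIENT of 4, downward and upward multipliers of 1 and 5):

```python
# Mainnet constants, in Gwei (1 ETH = 10**9 Gwei); a sketch of the hysteresis
# rule applied to a single validator, mirroring process_effective_balance_updates.
EFFECTIVE_BALANCE_INCREMENT = 10**9
MAX_EFFECTIVE_BALANCE = 32 * 10**9
HYSTERESIS_QUOTIENT = 4
HYSTERESIS_DOWNWARD_MULTIPLIER = 1
HYSTERESIS_UPWARD_MULTIPLIER = 5

def updated_effective_balance(balance: int, effective_balance: int) -> int:
    increment = EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT
    downward_threshold = increment * HYSTERESIS_DOWNWARD_MULTIPLIER   # 0.25 ETH
    upward_threshold = increment * HYSTERESIS_UPWARD_MULTIPLIER       # 1.25 ETH
    if (balance + downward_threshold < effective_balance
            or effective_balance + upward_threshold < balance):
        return min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
    return effective_balance

ETH = 10**9
# A drop to 30.8 ETH leaves an effective balance of 31 ETH untouched...
assert updated_effective_balance(308 * ETH // 10, 31 * ETH) == 31 * ETH
# ...but a drop below 30.75 ETH lowers it to 30 ETH.
assert updated_effective_balance(307 * ETH // 10, 31 * ETH) == 30 * ETH
```

Note the asymmetry: the exact balance must climb above 32.25 ETH before a 31 ETH effective balance moves up to 32 ETH, which is exactly what makes oscillation attacks expensive.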
The effective balances are updated according to the following function.
def process_effective_balance_updates(state: BeaconState) -> None:
# Update effective balances with hysteresis
for index, validator in enumerate(state.validators):
balance = state.balances[index]
HYSTERESIS_INCREMENT = uint64(EFFECTIVE_BALANCE_INCREMENT // HYSTERESIS_QUOTIENT)
DOWNWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER
UPWARD_THRESHOLD = HYSTERESIS_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER
if (
balance + DOWNWARD_THRESHOLD < validator.effective_balance
or validator.effective_balance + UPWARD_THRESHOLD < balance
):
validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
Finally, deposits are processed through the following function.
def process_deposit(state: BeaconState, deposit: Deposit) -> None:
# Verify the Merkle branch
assert is_valid_merkle_branch(
leaf=hash_tree_root(deposit.data),
branch=deposit.proof,
depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
index=state.eth1_deposit_index,
root=state.eth1_data.deposit_root,
)
# Deposits must be processed in order
state.eth1_deposit_index += 1
pubkey = deposit.data.pubkey
amount = deposit.data.amount
validator_pubkeys = [validator.pubkey for validator in state.validators]
if pubkey not in validator_pubkeys:
# Verify the deposit signature (proof of possession) which is not checked by the deposit contract
deposit_message = DepositMessage(
pubkey=deposit.data.pubkey,
withdrawal_credentials=deposit.data.withdrawal_credentials,
amount=deposit.data.amount,
)
domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
signing_root = compute_signing_root(deposit_message, domain)
# Initialize validator if the deposit signature is valid
if bls.Verify(pubkey, signing_root, deposit.data.signature):
state.validators.append(get_validator_from_deposit(state, deposit))
state.balances.append(amount)
state.previous_epoch_participation.append(ParticipationFlags(0b0000_0000))
state.current_epoch_participation.append(ParticipationFlags(0b0000_0000))
state.inactivity_scores.append(uint64(0))
else:
# Increase balance by deposit amount
index = ValidatorIndex(validator_pubkeys.index(pubkey))
increase_balance(state, index, amount)
BeaconState
will be presented a few sections below.
A validator that follows its protocol during an execution is called honest. A faulty validator, on the other hand, may crash or deviate arbitrarily from its specification, e.g., when corrupted by an adversary; such validators are also called Byzantine. In particular, Byzantine validators can equivocate, i.e., they can send conflicting messages. We consider Byzantine faults here and assume the existence of a probabilistic poly-time adversary
We assume that a best-effort gossip primitive that reaches all validators is available. Moreover, we assume that messages between honest validators are eventually received and cannot be forged. This also applies to messages sent by Byzantine validators, once they have been received by some honest validator.
Time is divided into discrete rounds and the validators have synchronized clocks. We define the notion of slot as a collection of
During each epoch, the set of validators is partitioned into (disjoint) committees, one committee per slot. More details about committees can be found a few sections below.
In a synchronous network, the message delay is upper-bounded by a constant
In a partially synchronous network in the sleepy model, communication is asynchronous until a global stabilization time (GST), after which communication becomes synchronous, i.e., message delays are bounded by
A view (at a given round
Gasper is a proof-of-stake consensus protocol obtained by combining two protocols: FFG Casper, a partially synchronous consensus protocol (or gadget), and a synchronous consensus protocol named LMD-GHOST. The latter evolved over the years in response to discovered problems. In the following sections, we present each protocol individually. Afterward, we show how the two interact with each other, culminating in the definition of Gasper.
FFG Casper is a partially synchronous consensus protocol (or gadget) that runs atop a proposal mechanism. Its design seeks to finalize the proposed blocks, ensuring their safety even during potential network partitions. Once a block is finalized it cannot be reverted, and no conflicting finalized block can be obtained. Casper introduces accountability, i.e., if a validator violates some rule, the violation can be detected and attributed to the violating validator. Accountability allows the system to penalize (to slash) Byzantine validators, solving the nothing-at-stake problem.
Casper, introduced by Buterin and Griffith, follows a two-phase, traditional propose-and-vote-based Byzantine fault tolerant (BFT) design, in the style of PBFT or HotStuff. Unlike PBFT and HotStuff, however, Casper is not a fully-specified protocol: it is designed as a gadget that works on top of a provided blockchain protocol. Moreover, there is no leader in charge of assembling proposals; proposals are instead generated across honest nodes by an underlying proposal mechanism which produces child blocks of existing blocks, forming an ever-growing block-tree. The root of the tree is the genesis block.
Checkpoint
class Checkpoint(Container):
epoch: Epoch
root: Root
Casper proceeds as follows. Validators participate in the protocol by casting votes on blocks in the block-tree formed by the underlying proposal mechanism. This means that in the context of Casper, messages exchanged among validators are votes for blocks.
A vote message consists of four fields: two blocks (called, in the context of Casper, checkpoints) $s$ and $t$, together with their heights $h(s)$ and $h(t)$.
Once a vote
Let
- $h(t_1) = h(t_2)$, i.e., a validator must not publish two distinct votes for the same target height; or
- $h(s_1) < h(s_2) < h(t_2) < h(t_1)$, i.e., a validator must not vote within the span of its other votes.
If a validator violates either condition, the evidence of the violation can be observed, at which point the validator’s entire deposit is taken away (it is slashed) with a reward given to the submitter of the evidence transaction.
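These two conditions can be checked mechanically. A minimal sketch, simplified to compare only the heights of a pair of votes from the same validator (the full check also compares the checkpoint hashes themselves):

```python
from typing import NamedTuple

class Vote(NamedTuple):
    source_height: int  # h(s)
    target_height: int  # h(t)

def is_slashable(v1: Vote, v2: Vote) -> bool:
    """True if publishing both distinct votes violates a Casper slashing condition."""
    if v1 == v2:
        return False
    # Double vote: two distinct votes with the same target height.
    if v1.target_height == v2.target_height:
        return True
    # Surround vote: the vote with the higher target spans the other one.
    inner, outer = sorted((v1, v2), key=lambda v: v.target_height)
    return outer.source_height < inner.source_height < inner.target_height < outer.target_height

# Double vote: same target height, different sources.
assert is_slashable(Vote(1, 5), Vote(2, 5))
# Surround vote: (0, 6) surrounds (2, 4).
assert is_slashable(Vote(0, 6), Vote(2, 4))
# A consistent voting history violates neither condition.
assert not is_slashable(Vote(0, 1), Vote(1, 2))
```

The surround check is exactly the chain $h(s_1) < h(s_2) < h(t_2) < h(t_1)$ with the outer vote playing the role of vote 1.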
Casper satisfies the following two properties, and the proof can be found in the full paper.
- Accountable Safety: Two conflicting checkpoints imply that more than $\frac{1}{3}$ adversarial stake can be detected.
- Plausible Liveness: It is always possible to produce new finalized checkpoints, provided there exist blocks extending the justified checkpoint with the greatest height, and more than $\frac{2}{3}$ of the validators' stake is honest.
The set of validators needs to be able to change. New validators must be able to join, and existing validators must be able to leave.
Voluntary Exit
class VoluntaryExit(Container):
epoch: Epoch # Earliest epoch when voluntary exit can be processed
validator_index: ValidatorIndex
class SignedVoluntaryExit(Container):
message: VoluntaryExit
signature: BLSSignature
When a validator wishes to exit from the validator list, it may create, sign, and broadcast a message of this type.
To accomplish this, the notion of dynasty of a block is introduced. The dynasty of a block
To generate two subsets of validators for any given dynasty
This implies a new way to define the notion of justified and finalized checkpoints. In particular, an ordered pair of checkpoints
LMD-GHOST, short for Latest Message Driven Greediest Heaviest Observed Sub-Tree, is a synchronous consensus protocol. The protocol proceeds in slots and epochs, as detailed above. During each slot, a block
Latest Message
@dataclass(eq=True, frozen=True)
class LatestMessage(object):
epoch: Epoch
root: Root
This struct represents the vote in the latest (meaning highest-epoch) valid attestation received from a validator.
Blocks situated at the start of an epoch are called epoch boundary blocks, playing a pivotal role as checkpoints, as we will observe later, for the FFG-Casper protocol.
Every validator
The LMD-GHOST fork-choice function (a fork-choice function is a deterministic function that takes as input the set of blocks and other messages that have been seen, i.e., a view, and outputs what the canonical chain is; this is required because there may be multiple valid chains to choose from) was introduced by Zamfir while looking for a "correct-by-construction" consensus protocol.
The idea behind the LMD-GHOST fork-choice function is that, at any fork, the protocol uses the weights of the subtrees created by the fork as a heuristic, assuming that the heaviest subtree is the right one, as the name of the algorithm suggests. The weight of a subtree is determined by the sum of the stake of the validators that have cast a vote on the blocks forming that subtree.
We first show informally how LMD-GHOST fork-choice function works through an example, and then we present the algorithm that implements it.
Let us consider a validator set
![](https://storage.googleapis.com/ethereum-hackmd/upload_839d656f9a58978c3dfc8cff562eae92.jpeg =500x230)
The protocol proceeds as follows: start from the genesis block; every time there is a fork, choose the side whose subtree is supported by more of the latest messages; keep doing this until a block with no descendants is reached.
By computing for each block the subset of latest messages that support either the block or one of its descendants, we obtain the following.
![](https://storage.googleapis.com/ethereum-hackmd/upload_35d68aae9c279a0d9da725e63f443517.jpeg =500x200)
To compute the head, start at the genesis block, and at each fork pick the branch with the higher count: first, pick the bottom chain, as it has 4 latest messages supporting it versus 1 for the single-block top chain; then, at the next fork, follow the middle chain.
We now present the protocol in a more formal way. First we introduce the notion of weight.
Finally, given a view
The following algorithm implements the LMD-GHOST fork-choice function.
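As a minimal sketch (not the pyspec implementation), the vanilla fork choice can be written as follows, assuming unit weight per validator and lexicographic tie-breaking; both are simplifications, since the real protocol weighs votes by effective balance:

```python
from typing import Dict, List, Optional

def lmd_ghost_head(parent: Dict[str, Optional[str]],
                   latest_votes: Dict[str, str]) -> str:
    """Vanilla LMD-GHOST: walk down from the root; at each fork, follow the
    child whose subtree is supported by the most latest messages."""
    children: Dict[str, List[str]] = {}
    for block, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(block)

    def is_ancestor(ancestor: str, block: Optional[str]) -> bool:
        while block is not None:
            if block == ancestor:
                return True
            block = parent[block]
        return False

    def support(block: str) -> int:
        # Latest messages voting for `block` or one of its descendants.
        return sum(1 for voted in latest_votes.values() if is_ancestor(block, voted))

    head = next(b for b, p in parent.items() if p is None)  # the root (genesis)
    while children.get(head):
        # Ties are broken lexicographically by sorting the children first.
        head = max(sorted(children[head]), key=support)
    return head

# Toy tree: genesis -> A -> B and genesis -> C, with five validators' latest votes.
parent = {"genesis": None, "A": "genesis", "B": "A", "C": "genesis"}
votes = {"v1": "B", "v2": "B", "v3": "A", "v4": "C", "v5": "B"}
assert lmd_ghost_head(parent, votes) == "B"
```

In the toy run, the subtree rooted at A carries 4 of the 5 latest messages versus 1 for C, so the walk descends through A and ends at B.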
Note that the fork-choice function detailed above isn't the one presently in use within the Ethereum protocol. Instead, it represents a more basic, or vanilla, LMD-GHOST fork-choice function. In the subsequent sections, we will outline the fork-choice function that Ethereum actually employs, which has been slightly adjusted to accommodate the impact of the finality gadget, Casper, within the entire protocol.
As we already mentioned, Gasper is a proof-of-stake protocol obtained by combining the finality gadget Casper together with LMD-GHOST protocol, the latter equipped with (an FFG-aware variation of) the fork-choice LMD-GHOST, called Hybrid LMD-GHOST (HLMD-GHOST) fork-choice function.
Beacon State
class BeaconState(Container):
# Versioning
genesis_time: uint64
genesis_validators_root: Root
slot: Slot
fork: Fork
# History
latest_block_header: BeaconBlockHeader
block_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
state_roots: Vector[Root, SLOTS_PER_HISTORICAL_ROOT]
historical_roots: List[Root, HISTORICAL_ROOTS_LIMIT]
# Eth1
eth1_data: Eth1Data
eth1_data_votes: List[Eth1Data, EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH]
eth1_deposit_index: uint64
# Registry
validators: List[Validator, VALIDATOR_REGISTRY_LIMIT]
balances: List[Gwei, VALIDATOR_REGISTRY_LIMIT]
# Randomness
randao_mixes: Vector[Bytes32, EPOCHS_PER_HISTORICAL_VECTOR]
# Slashings
slashings: Vector[Gwei, EPOCHS_PER_SLASHINGS_VECTOR] # Per-epoch sums of slashed effective balances
# Participation
previous_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
current_epoch_participation: List[ParticipationFlags, VALIDATOR_REGISTRY_LIMIT]
# Finality
justification_bits: Bitvector[JUSTIFICATION_BITS_LENGTH] # Bit set for every recent justified epoch
previous_justified_checkpoint: Checkpoint
current_justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
# Inactivity
inactivity_scores: List[uint64, VALIDATOR_REGISTRY_LIMIT]
# Sync
current_sync_committee: SyncCommittee
next_sync_committee: SyncCommittee
# Execution
latest_execution_payload_header: ExecutionPayloadHeader # [New in Bellatrix]
The BeaconState
is what every validator must agree on, i.e., it is the state that every honest validator (should) reach through Gasper. It contains all the relevant information regarding the active validators, their (exact) balances, the history of the blockchain, the slashed validators, and also all the information related to justification and finalization. Everything is bundled into this single state object.
We describe the most relevant (for this document) fields in the following.
genesis_time
: used by the fork-choice rule to figure out what slot we're in, and to validate execution_payloads
;
genesis_validators_root
: used to uniquely identify the chain that we are on;
latest_block_header
: used to process blocks that are descendants of the previous block;
block_roots
and state_roots
are stored in lists until they are full. After that, the Merkle root is taken of both the lists together and appended to historical_roots
;
eth1_data
: is the latest agreed upon state of the deposit contract;
eth1_data_votes
: votes on Eth1Data
;
eth1_deposit_index
: total number of deposits that have been processed by the beacon chain;
validators
and balances
contain the list of all the validators and their exact balances, respectively. Recall that every validator has two balances: an effective one (stored in the Validator object, updated less frequently) and an exact one, which changes every epoch. The exact one is stored here.
slashings
: used to store the total effective balance of all validators slashed in an epoch, for every epoch;
previous_epoch_participation
: used to record which validators participated in attesting during the previous epoch;
current_epoch_participation
: used to record which validators participated in attesting during the current epoch;
justification_bits
: used to keep track of the justification status of the last four epochs;
previous_justified_checkpoint
and current_justified_checkpoint
are the most recent justified Checkpoint
as it was during the previous epoch and the most recent justified Checkpoint
during the current epoch, respectively. They are used to filter attestations, i.e., blocks are considered valid only if they include attestations whose source checkpoint matches the justified checkpoint in the state. Moreover, they are used during the finalization process, following the rules of FFG Casper;
finalized_checkpoint
: used to keep track of the finalized checkpoint, as a result of FFG Casper. This guarantees that the state at or before the epoch of the finalized checkpoint will never be reverted, and the fork-choice rule does not need to go back further than this checkpoint;
latest_execution_payload_header
: header of the most recent execution payload.
def state_transition(state: BeaconState, signed_block: SignedBeaconBlock, validate_result: bool=True) -> None:
block = signed_block.message
# Process slots (including those with no blocks) since block
process_slots(state, block.slot)
# Verify signature
if validate_result:
assert verify_block_signature(state, signed_block)
# Process block
process_block(state, block)
# Verify state root
if validate_result:
assert block.state_root == hash_tree_root(state)
The state_transition
function is the function that is used to modify the BeaconState
, processing slots (through process_slots
), even in the case with no blocks, and blocks (through process_block
).
All the records/objects described in this box will be clearer after the next few sections.
In this section we show how to define justification and finalization in Gasper, and we present the Hybrid LMD-GHOST that is used among validators to choose the head/tip of the chain at any slot. Observe that in the context of Gasper, votes are often referred to as attestations -- we will use the terms interchangeably.
We start by presenting some preliminary notions.
For a block
However, a block could be an epoch boundary block in some chains but not others. For this reason, epoch boundary pairs (or pairs for short) $(B, j)$ are introduced, where
![](https://storage.googleapis.com/ethereum-hackmd/upload_b9d78fc7c37d93db2c139d73279a33e9.png =500x300)
In the image above,
Finally, observe that, in Gasper, instead of justifying and finalizing checkpoint blocks as with Casper, epoch boundary pairs are justified and finalized.
Given a block
In Gasper, validators are partitioned into committees in each epoch, with one committee per slot.
Committee
In order to decrease the number of messages on the network while attesting, the set of validators is partitioned (within an epoch) into committees. Each validator participates in only one committee, i.e., committees within an epoch are disjoint.
The protocol adjusts the number of committees in an epoch based on the number of active validators in that epoch. The number of committees is defined through the following function.
def get_committee_count_per_slot(state: BeaconState, epoch: Epoch) -> uint64:
"""
Return the number of committees in each slot for the given ``epoch``.
"""
return max(uint64(1), min(
MAX_COMMITTEES_PER_SLOT,
uint64(len(get_active_validator_indices(state, epoch))) // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE,
))
For example, assuming
In the current implementation of Gasper, there are
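Plugging the mainnet constants into get_committee_count_per_slot shows how the count saturates; a sketch (SLOTS_PER_EPOCH, TARGET_COMMITTEE_SIZE, and MAX_COMMITTEES_PER_SLOT below are the mainnet values):

```python
SLOTS_PER_EPOCH = 32
TARGET_COMMITTEE_SIZE = 128
MAX_COMMITTEES_PER_SLOT = 64

def committee_count_per_slot(active_validators: int) -> int:
    """Mirror of get_committee_count_per_slot, taking the validator count directly."""
    return max(1, min(MAX_COMMITTEES_PER_SLOT,
                      active_validators // SLOTS_PER_EPOCH // TARGET_COMMITTEE_SIZE))

# A small validator set still gets at least one committee per slot.
assert committee_count_per_slot(1_000) == 1
# 64 * 128 * 32 = 262,144 validators fill exactly 64 committees of 128 per slot.
assert committee_count_per_slot(262_144) == 64
# Beyond that, the count stays capped at 64 and committees simply grow larger.
assert committee_count_per_slot(500_000) == 64
```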
At the start of a new epoch, all the existing committees are disbanded and the active validator set is divided into new committees. The composition of the committees is determined at the beginning of an epoch. In particular, by considering the set of active validators for that epoch, and the RANDAO seed value at the start of the previous epoch (RANDAO is a mechanism to provide in-protocol randomness [https://eth2book.info/altair/part2/building_blocks/randomness/#the-randao]),
def get_seed(state: BeaconState, epoch: Epoch, domain_type: DomainType) -> Bytes32:
"""
Return the seed at ``epoch``.
"""
mix = get_randao_mix(state, Epoch(epoch + EPOCHS_PER_HISTORICAL_VECTOR - MIN_SEED_LOOKAHEAD - 1)) # Avoid underflow
return hash(domain_type + uint_to_bytes(epoch) + mix)
the active validators are divided among the committees in an epoch through the following function.
def compute_committee(indices: Sequence[ValidatorIndex],
seed: Bytes32,
index: uint64,
count: uint64) -> Sequence[ValidatorIndex]:
"""
Return the committee corresponding to ``indices``, ``seed``, ``index``, and committee ``count``.
"""
start = (len(indices) * index) // count
end = (len(indices) * uint64(index + 1)) // count
return [indices[compute_shuffled_index(uint64(i), uint64(len(indices)), seed)] for i in range(start, end)]
with compute_shuffled_index
defined as follows.
def compute_shuffled_index(index: uint64, index_count: uint64, seed: Bytes32) -> uint64:
"""
Return the shuffled index corresponding to ``seed`` (and ``index_count``).
"""
assert index < index_count
# Swap or not (https://link.springer.com/content/pdf/10.1007%2F978-3-642-32009-5_1.pdf)
# See the 'generalized domain' algorithm on page 3
for current_round in range(SHUFFLE_ROUND_COUNT):
pivot = bytes_to_uint64(hash(seed + uint_to_bytes(uint8(current_round)))[0:8]) % index_count
flip = (pivot + index_count - index) % index_count
position = max(index, flip)
source = hash(
seed
+ uint_to_bytes(uint8(current_round))
+ uint_to_bytes(uint32(position // 256))
)
byte = uint8(source[(position % 256) // 8])
bit = (byte >> (position % 8)) % 2
index = flip if bit else index
return index
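A standalone re-implementation of this swap-or-not shuffle is useful for checking the property that motivates `position = max(index, flip)`: both members of a swap pair read the same decision bit, so the mapping is a permutation. Here sha256 stands in for the spec's hash function (which is sha256 in phase 0), with the same little-endian byte conventions:

```python
import hashlib

SHUFFLE_ROUND_COUNT = 90  # mainnet value

def shuffled_index(index: int, index_count: int, seed: bytes) -> int:
    """Standalone swap-or-not shuffle mirroring compute_shuffled_index."""
    assert index < index_count
    for current_round in range(SHUFFLE_ROUND_COUNT):
        pivot = int.from_bytes(
            hashlib.sha256(seed + bytes([current_round])).digest()[0:8], "little"
        ) % index_count
        flip = (pivot + index_count - index) % index_count
        position = max(index, flip)
        source = hashlib.sha256(
            seed + bytes([current_round]) + (position // 256).to_bytes(4, "little")
        ).digest()
        byte = source[(position % 256) // 8]
        bit = (byte >> (position % 8)) % 2
        index = flip if bit else index
    return index

# Each round either swaps `index` with `flip` or leaves both in place, and the
# pair agrees on the decision bit, so every output index occurs exactly once.
n, seed = 100, b"\x42" * 32
assert sorted(shuffled_index(i, n, seed) for i in range(n)) == list(range(n))
```

Computing one shuffled index at a time (rather than shuffling the whole list) is what lets a validator check its own committee assignment cheaply.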
Validators are assigned to committees randomly in order to prevent an attacker from dominating a single committee.
Other than the beacon committees, whose members, as we already said, attest to what they see as the head of the chain with the fork-choice rule HLMD-GHOST, the current implementation of Gasper also considers the sync committee.
The sync committee is a committee of 512 validators (SYNC_COMMITTEE_SIZE on mainnet).
The purpose of the sync committee is to allow light clients, i.e., nodes able to run on low-resource devices, to keep track of the chain of beacon block headers. Sync committees are (i) updated infrequently, and (ii) saved directly in the beacon state, allowing light clients to verify the sync committee with a Merkle branch from a block header that they already know about, and to use the public keys in the sync committee to directly authenticate signatures of more recent blocks. [https://github.com/ethereum/annotated-spec/blob/master/altair/sync-protocol.md]
class SyncCommitteeMessage(Container):
# Slot to which this contribution pertains
slot: Slot
# Block root for this signature
beacon_block_root: Root
# Index of the validator that produced this signature
validator_index: ValidatorIndex
# Signature by the validator over the block root of `slot`
signature: BLSSignature
To determine sync committee assignments, a validator can run the following function.
def is_assigned_to_sync_committee(state: BeaconState,
epoch: Epoch,
validator_index: ValidatorIndex) -> bool:
sync_committee_period = compute_sync_committee_period(epoch)
current_epoch = get_current_epoch(state)
current_sync_committee_period = compute_sync_committee_period(current_epoch)
next_sync_committee_period = current_sync_committee_period + 1
assert sync_committee_period in (current_sync_committee_period, next_sync_committee_period)
pubkey = state.validators[validator_index].pubkey
if sync_committee_period == current_sync_committee_period:
return pubkey in state.current_sync_committee.pubkeys
else: # sync_committee_period == next_sync_committee_period
return pubkey in state.next_sync_committee.pubkeys
Here, epoch
is an epoch number within the current or next sync committee period, computed as follows.
def compute_sync_committee_period(epoch: Epoch) -> uint64:
return epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD
is_assigned_to_sync_committee
is a predicate that indicates the presence or absence of the validator in the corresponding sync committee for the queried sync committee period. [https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/validator.md#containers]
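For example, with the mainnet value EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256, the mapping from epochs to sync committee periods looks like this:

```python
EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256  # mainnet value

def compute_sync_committee_period(epoch: int) -> int:
    """Same integer division as the spec helper above."""
    return epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD

# Epochs 0..255 share period 0; epoch 256 starts period 1.
assert compute_sync_committee_period(255) == 0
assert compute_sync_committee_period(256) == 1
assert compute_sync_committee_period(1000) == 3
```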
Observe that a validator can be part of both a beacon committee and a sync committee.
In more detail, in each slot the protocol requires validators to carry out two types of work. One validator in the committee, called the proposer, needs to propose a new block (a message containing the slot number, a pointer to the parent block, a set of pointers to all the attestations that the validator has accepted but that have not been included in any ancestor block, and some implementation-specific data).
Proposer
def compute_proposer_index(state: BeaconState, indices: Sequence[ValidatorIndex], seed: Bytes32) -> ValidatorIndex:
"""
Return from ``indices`` a random index sampled by effective balance.
"""
assert len(indices) > 0
MAX_RANDOM_BYTE = 2**8 - 1
i = uint64(0)
total = uint64(len(indices))
while True:
candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
effective_balance = state.validators[candidate_index].effective_balance
if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
return candidate_index
i += 1
This function chooses a candidate proposer, accepts it with probability BALANCE/32, and keeps trying on failure. This is done so that the probability of being selected as proposer remains proportional to the effective balance.
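The rejection-sampling idea can be isolated in a simplified sketch. For illustration the candidates are visited in index order rather than via compute_shuffled_index, and sha256 stands in for the spec's hash; neither simplification changes the acceptance rule:

```python
import hashlib
from typing import List

MAX_EFFECTIVE_BALANCE = 32 * 10**9  # Gwei
MAX_RANDOM_BYTE = 2**8 - 1

def sample_proposer(effective_balances: List[int], seed: bytes) -> int:
    """Rejection-sample an index with probability proportional to effective balance.
    Simplified: candidates are visited in index order, not shuffled order."""
    total = len(effective_balances)
    i = 0
    while True:
        candidate = i % total
        # One fresh pseudo-random byte per attempt, 32 bytes per hash call.
        random_byte = hashlib.sha256(seed + (i // 32).to_bytes(8, "little")).digest()[i % 32]
        if effective_balances[candidate] * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
            return candidate
        i += 1

# A validator at MAX_EFFECTIVE_BALANCE always passes the acceptance test, so with
# every balance maximal the first candidate wins regardless of the seed.
assert sample_proposer([MAX_EFFECTIVE_BALANCE] * 4, b"seed") == 0
```

A validator with effective balance $b$ passes each acceptance test with probability roughly $b/32$ ETH, which is what makes selection proportional to stake.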
Beacon Block
class BeaconBlockBody(Container):
randao_reveal: BLSSignature
eth1_data: Eth1Data # Eth1 data vote
graffiti: Bytes32 # Arbitrary data
# Operations
proposer_slashings: List[ProposerSlashing, MAX_PROPOSER_SLASHINGS]
attester_slashings: List[AttesterSlashing, MAX_ATTESTER_SLASHINGS]
attestations: List[Attestation, MAX_ATTESTATIONS]
deposits: List[Deposit, MAX_DEPOSITS]
voluntary_exits: List[SignedVoluntaryExit, MAX_VOLUNTARY_EXITS]
sync_aggregate: SyncAggregate
# Execution
execution_payload: ExecutionPayload # [New in Bellatrix]
The BeaconBlock
is the block that the proposer for a given slot creates and communicates to the network. This block, if correct, will update the BeaconState
of every honest validator. Validators are randomly selected to propose BeaconBlock
s, and there can be at most one BeaconBlock
per slot.
randao_reveal
: record used for randomness purposes. The proposer generates it by signing the current epoch number with its private key;
eth1_data
: a record inherited from the pre-merge version of the Ethereum consensus protocol, used for the deposits of validators;
graffiti
: record used by the proposer to insert whatever data it wishes. It has no protocol level significance;
proposer_slashings
: contains up to MAX_PROPOSER_SLASHINGS
ProposerSlashing
objects (see below);
attester_slashings
: contains up to MAX_ATTESTER_SLASHINGS
AttesterSlashing
objects (see below);
attestations
: contains up to MAX_ATTESTATIONS
Attestation objects
(see below). The proposer earns a reward for including well-packed aggregate attestations;
deposits
: if the block does not contain all the outstanding Deposits
, it is considered invalid;
voluntary_exits
: contains up to MAX_VOLUNTARY_EXITS
SignedVoluntaryExit
objects;
sync_aggregate
: a record in which the current sync committee votes for the chain head of the previous slot;
execution_payload
: Ethereum transactions are included in this record.
class BeaconBlock(Container):
slot: Slot
proposer_index: ValidatorIndex
parent_root: Root
state_root: Root
body: BeaconBlockBody
Moreover, every validator in the committee needs to attest to its view of the head of the chain with an attestation
Note that, while attesting, a validator does two things at once: it is simultaneously casting a GHOST vote for its block and also a Casper FFG vote for the transition between the two epoch boundary pairs (akin to Casper’s checkpoint blocks).
Attestation
class AttestationData(Container):
slot: Slot
index: CommitteeIndex
# LMD GHOST vote
beacon_block_root: Root
# FFG vote
source: Checkpoint
target: Checkpoint
AttestationData
contains information about the attestation that a validator sent to the network in a given slot.
slot
: slot in which a validator is casting the attestation. It is recorded here for validation purposes;
index
: there can be several committees active in a single slot. This is the number of the committee that the validator belongs to in that slot;
beacon_block_root
: the validator's vote on the head block for that slot after locally running the LMD GHOST fork-choice rule. If the validator believes that the current slot is empty, then this record might be the root of a block from a previous slot;
source
: the best currently justified checkpoint for the Casper FFG finalization process, according to the validator;
target
: the block at the start of the current epoch (EBB), also for Casper FFG finalization, according to the validator;
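To make the dual vote concrete, the following is a minimal Python sketch (plain dataclasses standing in for the SSZ containers; slot, epochs, and roots are made up for illustration) of an attestation carrying both an LMD GHOST head vote and an FFG source-to-target vote:

```python
from dataclasses import dataclass

# Plain dataclasses standing in for the SSZ containers above
# (real types: Root = Bytes32, Slot / Epoch / CommitteeIndex = uint64).
@dataclass(frozen=True)
class Checkpoint:
    epoch: int
    root: str

@dataclass(frozen=True)
class AttestationData:
    slot: int
    index: int
    beacon_block_root: str  # LMD GHOST vote: the head of the chain
    source: Checkpoint      # FFG vote: best justified checkpoint
    target: Checkpoint      # FFG vote: epoch boundary block of the current epoch

# One attestation carries both votes at once. Slot 65 lies in epoch 2
# (with 32 slots per epoch), so the target epoch is 2.
att = AttestationData(
    slot=65, index=0,
    beacon_block_root="0xhead",
    source=Checkpoint(epoch=1, root="0xsrc"),
    target=Checkpoint(epoch=2, root="0xebb"),
)
assert att.target.epoch == att.slot // 32
```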
AttestationData
is wrapped in the following.
class Attestation(Container):
aggregation_bits: Bitlist[MAX_VALIDATORS_PER_COMMITTEE]
data: AttestationData
signature: BLSSignature
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
data = attestation.data
assert data.target.epoch in (get_previous_epoch(state), get_current_epoch(state))
assert data.target.epoch == compute_epoch_at_slot(data.slot)
assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH
assert data.index < get_committee_count_per_slot(state, data.target.epoch)
committee = get_beacon_committee(state, data.slot, data.index)
assert len(attestation.aggregation_bits) == len(committee)
# Participation flag indices
participation_flag_indices = get_attestation_participation_flag_indices(state, data, state.slot - data.slot)
# Verify signature
assert is_valid_indexed_attestation(state, get_indexed_attestation(state, attestation))
# Update epoch participation flags
if data.target.epoch == get_current_epoch(state):
epoch_participation = state.current_epoch_participation
else:
epoch_participation = state.previous_epoch_participation
proposer_reward_numerator = 0
for index in get_attesting_indices(state, data, attestation.aggregation_bits):
for flag_index, weight in enumerate(PARTICIPATION_FLAG_WEIGHTS):
if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
proposer_reward_numerator += get_base_reward(state, index) * weight
# Reward proposer
proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
This function processes attestations included in BeaconBlockBody
. First it performs some validity checks: for example, the target checkpoint and the attestation's slot must belong to the same epoch, and the attestation must be no older than SLOTS_PER_EPOCH
slots (32 slots on mainnet, i.e., one epoch).
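As an illustration of the inclusion-window check, the following sketch mirrors the corresponding assert in process_attestation, assuming the mainnet constants MIN_ATTESTATION_INCLUSION_DELAY = 1 and SLOTS_PER_EPOCH = 32:

```python
MIN_ATTESTATION_INCLUSION_DELAY = 1  # mainnet value (assumed here)
SLOTS_PER_EPOCH = 32                 # mainnet value (assumed here)

def is_in_inclusion_window(attestation_slot: int, state_slot: int) -> bool:
    """Mirror of the assert in process_attestation: an attestation may be
    included at the earliest one slot after it was made, and at the latest
    one epoch (32 slots) after."""
    return (attestation_slot + MIN_ATTESTATION_INCLUSION_DELAY
            <= state_slot
            <= attestation_slot + SLOTS_PER_EPOCH)

# An attestation from slot 100 is includable in slots 101..132:
assert not is_in_inclusion_window(100, 100)  # too early: same slot
assert is_in_inclusion_window(100, 101)
assert is_in_inclusion_window(100, 132)
assert not is_in_inclusion_window(100, 133)  # older than one epoch
```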
Both proposing and attesting require the validator to add the corresponding block or attestation to its own view, and then to broadcast it to the network. Observe that both operations require the committee member to run the same fork-choice function on its own view.
To summarize, validators in Gasper cast either a proposal message, containing a new block on top of the canonical chain output by HLMD-GHOST fork-choice function, or an attestation message, containing two types of vote: an LMD-GHOST vote for the head of the canonical chain output by HLMD-GHOST fork-choice function and an FFG Casper vote between epoch boundary pairs.
In the previous section, the notions of justification and finalization were given in the context of FFG Casper. For Gasper, these are extended as follows.
Recall an attestation
Given a view
With the notion of justification, it becomes clearer then what both
![](https://storage.googleapis.com/ethereum-hackmd/upload_0fd519d0f87f4080f1fe75b73e45830b.png =600x150)
A validator’s view
Everything here is analogous to Casper FFG. Inside the chain of
For a view $G$, an epoch boundary pair $(B_0,j)$ is $k$-finalized if:
-
$(B_0,j),(B_1,j+1),...,(B_k,j+k)$ are adjacent epoch boundary pairs in$\mathscr{chain}(B_k)$ ; -
$(B_0,j),(B_1,j+1),...,(B_{k−1},j+k−1)$ are all in$J(G)$ ; -
$(B_0,j) \xrightarrow[]{J} (B_k,j+k)$ .
The set of finalized pairs in a view $G$ is denoted by $F(G)$.
Justification and Finalization
def process_justification_and_finalization(state: BeaconState) -> None:
# Initial FFG checkpoint values have a `0x00` stub for `root`.
# Skip FFG updates in the first two epochs to avoid corner cases that might result in modifying this stub.
if get_current_epoch(state) <= GENESIS_EPOCH + 1:
return
previous_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_previous_epoch(state))
current_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_current_epoch(state))
total_active_balance = get_total_active_balance(state)
previous_target_balance = get_total_balance(state, previous_indices)
current_target_balance = get_total_balance(state, current_indices)
weigh_justification_and_finalization(state, total_active_balance, previous_target_balance, current_target_balance)
The FFG Casper logic happens in process_justification_and_finalization
.
get_unslashed_participating_indices()
: returns the validators that made a timely attestation with the type TIMELY_TARGET_FLAG_INDEX
during the epoch in question. It is used to calculate the proportion of stake that voted for the candidate checkpoint in the current and previous epochs. Observe that if a validator has the TIMELY_TARGET_FLAG_INDEX
flag set, it means that a correct source and target have been used in the attestation it broadcast. Hence, get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, n)
returns the set of unslashed validators that attested with a correct source and target during epoch n;
get_total_active_balance(state: BeaconState)
: the sum of the effective balances of all active validators in the current epoch (slashed validators are not included);
get_total_balance(state: BeaconState, indices: Set[ValidatorIndex])
: returns the total balance of all validators in the list indices
passed in.
The function process_justification_and_finalization
takes the lists of unslashed validators in the current and previous epochs, gets their respective total balances, and calculates the sum of the effective balances of all active validators in the current epoch. Then, these aggregate balances are passed to the following function for justification and finalization.
def weigh_justification_and_finalization(state: BeaconState,
total_active_balance: Gwei,
previous_epoch_target_balance: Gwei,
current_epoch_target_balance: Gwei) -> None:
previous_epoch = get_previous_epoch(state)
current_epoch = get_current_epoch(state)
old_previous_justified_checkpoint = state.previous_justified_checkpoint
old_current_justified_checkpoint = state.current_justified_checkpoint
# Process justifications
state.previous_justified_checkpoint = state.current_justified_checkpoint
state.justification_bits[1:] = state.justification_bits[:JUSTIFICATION_BITS_LENGTH - 1]
state.justification_bits[0] = 0b0
if previous_epoch_target_balance * 3 >= total_active_balance * 2:
state.current_justified_checkpoint = Checkpoint(epoch=previous_epoch,
root=get_block_root(state, previous_epoch))
state.justification_bits[1] = 0b1
if current_epoch_target_balance * 3 >= total_active_balance * 2:
state.current_justified_checkpoint = Checkpoint(epoch=current_epoch,
root=get_block_root(state, current_epoch))
state.justification_bits[0] = 0b1
# Process finalizations
bits = state.justification_bits
# The 2nd/3rd/4th most recent epochs are justified, the 2nd using the 4th as source
if all(bits[1:4]) and old_previous_justified_checkpoint.epoch + 3 == current_epoch:
state.finalized_checkpoint = old_previous_justified_checkpoint
# The 2nd/3rd most recent epochs are justified, the 2nd using the 3rd as source
if all(bits[1:3]) and old_previous_justified_checkpoint.epoch + 2 == current_epoch:
state.finalized_checkpoint = old_previous_justified_checkpoint
# The 1st/2nd/3rd most recent epochs are justified, the 1st using the 3rd as source
if all(bits[0:3]) and old_current_justified_checkpoint.epoch + 2 == current_epoch:
state.finalized_checkpoint = old_current_justified_checkpoint
# The 1st/2nd most recent epochs are justified, the 1st using the 2nd as source
if all(bits[0:2]) and old_current_justified_checkpoint.epoch + 1 == current_epoch:
state.finalized_checkpoint = old_current_justified_checkpoint
Given a BeaconState
, the function weigh_justification_and_finalization
first handles the justification, and then the finalization.
Recall that the justification_bits
record contained in the BeaconState
is used to keep track of the justification status of the last four epochs: state.justification_bits[i]
represents the justification status of the epoch $i$ epochs before the current one, so, for example, state.justification_bits[3]=1
means that the epoch three epochs before the current one has been justified.
In particular, in the first part (# Process justifications), given the balances of the previous and the current epochs (i.e., the total amount of deposits of the active validators that cast a vote in the previous and current epochs), the two if
statements try to justify the previous and the current epoch's checkpoints, respectively. If both conditions hold, then before the finalization process starts we have state.justification_bits[0]=state.justification_bits[1]=1
.
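A toy model of this justification step (plain Python lists instead of SSZ bitvectors; balances are arbitrary illustrative numbers) makes the shift-and-set behavior explicit:

```python
JUSTIFICATION_BITS_LENGTH = 4

def process_justification_step(bits, previous_target_balance,
                               current_target_balance, total_active_balance):
    """Toy model of the '# Process justifications' part above: shift the
    four-epoch justification window by one and set the fresh bits if the
    corresponding checkpoints gathered a 2/3 supermajority (the comparison
    balance * 3 >= total * 2 avoids any division)."""
    bits = [0] + bits[:JUSTIFICATION_BITS_LENGTH - 1]  # shift the window
    if previous_target_balance * 3 >= total_active_balance * 2:
        bits[1] = 1  # previous epoch's checkpoint justified
    if current_target_balance * 3 >= total_active_balance * 2:
        bits[0] = 1  # current epoch's checkpoint justified
    return bits

# Out of 96 ETH of total stake, 64 ETH is exactly the 2/3 threshold:
# the previous epoch is justified, the current one (63 ETH) is not.
assert process_justification_step([1, 1, 0, 0], 64, 63, 96) == [0, 1, 1, 0]
```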
For the second part (# Process finalizations), observe that an epoch boundary pair $(B_0,j)$ is $2$-finalized if:
-
$(B_0,j),(B_1,j+1),(B_2,j+2)$ are adjacent epoch boundary pairs in$\mathscr{chain}(B_2)$ ; -
$(B_0,j),(B_1,j+1)$ are both in $J(G)$ ; -
$(B_0,j) \xrightarrow[]{J} (B_2,j+2)$ .
In weigh_justification_and_finalization
,
- the first condition corresponds to requiring that
old_previous_justified_checkpoint.epoch + 3 == current_epoch
or old_previous_justified_checkpoint.epoch + 2 == current_epoch
; - the second condition corresponds to requiring that
state.justification_bits[3]=state.justification_bits[2]=1
; and - the third condition corresponds to requiring that
state.justification_bits[1]=1
. In particular, if state.justification_bits[0]=state.justification_bits[1]=1
(from the justification process above) and if state.justification_bits[3]=state.justification_bits[2]=1
, then the existence of supermajority links is implied.
We can now finally present HLMD-GHOST, a variation of LMD-GHOST. In this variation, the protocol freezes the state of the latest justified pair
One can think of each chain of a leaf block
The fork-choice function of Gasper went through several updates and changes over the years; to see how it evolved, see this chapter.
fork-choice
The fork-choice is implemented through a store
object that contains received fork-choice-relevant information, and a function get_head(store)
. Store
represents the view
@dataclass
class Store(object):
time: uint64
genesis_time: uint64
justified_checkpoint: Checkpoint
finalized_checkpoint: Checkpoint
best_justified_checkpoint: Checkpoint
proposer_boost_root: Root
equivocating_indices: Set[ValidatorIndex]
blocks: Dict[Root, BeaconBlock] = field(default_factory=dict)
block_states: Dict[Root, BeaconState] = field(default_factory=dict)
checkpoint_states: Dict[Checkpoint, BeaconState] = field(default_factory=dict)
latest_messages: Dict[ValidatorIndex, LatestMessage] = field(default_factory=dict)
The records are as follows.
time
: the current time;
genesis_time
: the time of the genesis block of the chain;
justified_checkpoint
: the FFG-Casper-justified checkpoint that is used as the root of the HLMD-GHOST fork-choice;
finalized_checkpoint
: the last finalized checkpoint;
best_justified_checkpoint
: the justified checkpoint that we will switch to at the start of the next epoch;
proposer_boost_root
: the root of the current-slot block that received the proposer score boost, if any [ethereum/consensus-specs#2730] (see the section on Proposer Weight Boosting);
equivocating_indices
: list of validators that equivocated, i.e., cast contradictory votes [ethereum/consensus-specs#2845] (see the section on Equivocation Discounting);
blocks
: all blocks that we know about;
block_states
: the post-state of every block that we know about;
checkpoint_states
: the post-state of every checkpoint. This could be different from the post-state of the block referenced by the checkpoint in the case where there are skipped slots;
latest_messages
: the latest epoch and block voted for by each validator;
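The LMD ("latest message driven") semantics of latest_messages can be sketched as follows (a simplified model of the spec's update_latest_messages helper; validator indices, epochs, and roots are illustrative):

```python
from dataclasses import dataclass

@dataclass
class LatestMessage:
    epoch: int
    root: str

def update_latest_messages(latest_messages, equivocating_indices,
                           attesting_indices, epoch, beacon_block_root):
    """Sketch of the spec's update_latest_messages helper: for every
    non-equivocating attester, keep only its most recent (highest-epoch)
    vote -- this is the 'latest message' used by LMD GHOST."""
    for i in attesting_indices:
        if i in equivocating_indices:
            continue  # equivocating validators are discounted entirely
        if i not in latest_messages or epoch > latest_messages[i].epoch:
            latest_messages[i] = LatestMessage(epoch, beacon_block_root)

msgs = {}
update_latest_messages(msgs, set(), [0, 1], epoch=5, beacon_block_root="0xa")
update_latest_messages(msgs, {1}, [0, 1], epoch=6, beacon_block_root="0xb")
assert msgs[0].root == "0xb"  # validator 0's vote was updated
assert msgs[1].root == "0xa"  # validator 1 equivocated; old vote kept
```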
At genesis, let store = get_forkchoice_store(genesis_state)
and update store by running:
(i)
def on_tick(store: Store, time: uint64) -> None:
previous_slot = get_current_slot(store)
# update store time
store.time = time
current_slot = get_current_slot(store)
# Reset store.proposer_boost_root if this is a new slot
if current_slot > previous_slot:
store.proposer_boost_root = Root()
# Not a new epoch, return
if not (current_slot > previous_slot and compute_slots_since_epoch_start(current_slot) == 0):
return
# Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
if store.best_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
ancestor_at_finalized_slot = get_ancestor(store, store.best_justified_checkpoint.root, finalized_slot)
if ancestor_at_finalized_slot == store.finalized_checkpoint.root:
store.justified_checkpoint = store.best_justified_checkpoint
This function runs on each tick. At each epoch boundary, it updates the justified checkpoint used in the fork-choice;
(ii)
def on_attestation(store: Store, attestation: Attestation) -> None:
"""
Run ``on_attestation`` upon receiving a new ``attestation`` from either within a block or directly on the wire.
An ``attestation`` that is asserted as invalid may be valid at a later time,
consider scheduling it for later processing in such case.
"""
validate_on_attestation(store, attestation)
store_target_checkpoint_state(store, attestation.data.target)
# Get state at the `target` to fully validate attestation
target_state = store.checkpoint_states[attestation.data.target]
indexed_attestation = get_indexed_attestation(target_state, attestation)
assert is_valid_indexed_attestation(target_state, indexed_attestation)
# Update latest messages for attesting indices
update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
This function runs whenever a new attestation is received; and
(iii)
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
"""
Run ``on_block`` upon receiving a new block.
A block that is asserted as invalid due to unavailable PoW block may be valid at a later time,
consider scheduling it for later processing in such case.
"""
block = signed_block.message
# Parent block must be known
assert block.parent_root in store.block_states
# Make a copy of the state to avoid mutability issues
pre_state = copy(store.block_states[block.parent_root])
# Blocks cannot be in the future. If they are, their consideration must be delayed until they are in the past.
assert get_current_slot(store) >= block.slot
# Check that block is later than the finalized epoch slot (optimization to reduce calls to get_ancestor)
finalized_slot = compute_start_slot_at_epoch(store.finalized_checkpoint.epoch)
assert block.slot > finalized_slot
# Check block is a descendant of the finalized block at the checkpoint finalized slot
assert get_ancestor(store, block.parent_root, finalized_slot) == store.finalized_checkpoint.root
# Check the block is valid and compute the post-state
state = pre_state.copy()
state_transition(state, signed_block, True)
# [New in Bellatrix]
if is_merge_transition_block(pre_state, block.body):
validate_merge_block(block)
# Add new block to the store
store.blocks[hash_tree_root(block)] = block
# Add new state for this block to the store
store.block_states[hash_tree_root(block)] = state
# Add proposer score boost if the block is timely
time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
if get_current_slot(store) == block.slot and is_before_attesting_interval:
store.proposer_boost_root = hash_tree_root(block)
# Update justified checkpoint
if state.current_justified_checkpoint.epoch > store.justified_checkpoint.epoch:
if state.current_justified_checkpoint.epoch > store.best_justified_checkpoint.epoch:
store.best_justified_checkpoint = state.current_justified_checkpoint
if should_update_justified_checkpoint(store, state.current_justified_checkpoint):
store.justified_checkpoint = state.current_justified_checkpoint
# Update finalized checkpoint
if state.finalized_checkpoint.epoch > store.finalized_checkpoint.epoch:
store.finalized_checkpoint = state.finalized_checkpoint
store.justified_checkpoint = state.current_justified_checkpoint
This function runs whenever a SignedBeaconBlock
is received.
Any of the above handlers that trigger an unhandled exception are considered invalid. Invalid calls to handlers must not modify store
.
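The timeliness condition that governs the proposer boost in on_block can be isolated as follows, assuming the mainnet values SECONDS_PER_SLOT = 12 and INTERVALS_PER_SLOT = 3 (so "timely" means within the first 4 seconds of the slot, before attesters vote):

```python
SECONDS_PER_SLOT = 12   # mainnet value (assumed here)
INTERVALS_PER_SLOT = 3  # mainnet value: the slot is split into thirds

def is_before_attesting_interval(store_time: int, genesis_time: int) -> bool:
    """Mirror of the timeliness check in on_block: a block earns the
    proposer score boost only if it arrives within the first third of its
    slot (the first 4 seconds under the constants above)."""
    time_into_slot = (store_time - genesis_time) % SECONDS_PER_SLOT
    return time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT

genesis = 1_606_824_023  # an arbitrary example genesis timestamp
assert is_before_attesting_interval(genesis + 12 * 10 + 3, genesis)      # 3s into a slot
assert not is_before_attesting_interval(genesis + 12 * 10 + 4, genesis)  # 4s: too late
```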
Observe that best_justified_checkpoint
is stored to prevent an attack on FFG Casper [https://ethresear.ch/t/analysis-of-bouncing-attack-on-ffg/6113]. In particular, on_block
uses the following function.
def should_update_justified_checkpoint(store: Store, new_justified_checkpoint: Checkpoint) -> bool:
"""
To address the bouncing attack, only update conflicting justified
checkpoints in the fork-choice if in the early slots of the epoch.
Otherwise, delay incorporation of new justified checkpoint until next epoch boundary.
See https://ethresear.ch/t/prevention-of-bouncing-attack-on-ffg/6114 for more detailed analysis and discussion.
"""
if compute_slots_since_epoch_start(get_current_slot(store)) < SAFE_SLOTS_TO_UPDATE_JUSTIFIED:
return True
justified_slot = compute_start_slot_at_epoch(store.justified_checkpoint.epoch)
if not get_ancestor(store, new_justified_checkpoint.root, justified_slot) == store.justified_checkpoint.root:
return False
return True
The idea here is that we want to only change the last-justified-block within the first $\frac{1}{3}$ of an epoch. This prevents bouncing attacks of the following form.
- Start from a scenario where in epoch $n$, $62\%$ of validators support block $A$, and in epoch $n+1$, $62\%$ of validators support a block $B$. Suppose that the attacker has $5\%$ of the total stake. This scenario requires very exceptional networking conditions to get into; the point of the attack, however, is that if we get into such a scenario the attacker could perpetuate it, permanently preventing finality.
- Due to LMD-GHOST, $B$ is favored, and so validators are continuing to vote for $B$. However, the attacker suddenly publishes attestations worth $5\%$ of the total stake tagged with epoch $n$ for block $A$, causing $A$ to get justified.
- In epoch $n+2$, $A$ is justified and so validators are attesting to $A'$, a descendant of $A$. When $A'$ gets to $62\%$ support, the attacker publishes attestations worth $5\%$ of total stake tagged with epoch $n+1$ for $B$. Now $B$ is justified and favored by the fork-choice.
- In epoch $n+3$, $B$ is justified, and so validators are attesting to $B'$, a descendant of $B$. When $B'$ gets to $62\%$ support, the attacker publishes attestations worth $5\%$ of total stake tagged with epoch $n+2$ for $A'$, etc.
This could continue forever, bouncing permanently between the two chains preventing any new block from being finalized. This attack can happen because the combined use of LMD-GHOST and FFG Casper creates a discontinuity, where a small shift in support for a block can outweigh a large amount of support for another block if that small shift pushes it past the $\frac{2}{3}$ threshold needed for justification. We block the attack by only allowing the latest justified block to change near the beginning of an epoch; this way, there is a full $\frac{2}{3}$ of an epoch during which honest validators agree on the head and have the opportunity to justify a block and thereby further cement it, at the same time causing the LMD-GHOST rule to strongly favor that head. This sets up that block to most likely be finalized in the next epoch. [https://github.com/ethereum/annotated-spec/blob/master/phase0/fork-choice.md#should_update_justified_checkpoint]
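The safe-window condition used by should_update_justified_checkpoint can be isolated in a small sketch. The value SAFE_SLOTS_TO_UPDATE_JUSTIFIED = 8 is an assumption here (taken from the phase 0 configuration; with 32-slot epochs it corresponds to the first quarter of an epoch):

```python
SLOTS_PER_EPOCH = 32              # mainnet value (assumed here)
SAFE_SLOTS_TO_UPDATE_JUSTIFIED = 8  # phase 0 configuration value (assumed)

def in_safe_update_window(slot: int) -> bool:
    """A conflicting justified checkpoint may replace the current one
    immediately only in the early slots of an epoch; later arrivals are
    parked in best_justified_checkpoint until the next epoch boundary."""
    slots_since_epoch_start = slot % SLOTS_PER_EPOCH
    return slots_since_epoch_start < SAFE_SLOTS_TO_UPDATE_JUSTIFIED

assert in_safe_update_window(32 * 5)          # first slot of epoch 5
assert in_safe_update_window(32 * 5 + 7)      # still inside the window
assert not in_safe_update_window(32 * 5 + 8)  # window closed
```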
The following function initializes the store
given a particular block (the anchor_block
) that the fork-choice would start from. The provided anchor_state
will be regarded as a trusted state.
def get_forkchoice_store(anchor_state: BeaconState, anchor_block: BeaconBlock) -> Store:
assert anchor_block.state_root == hash_tree_root(anchor_state)
anchor_root = hash_tree_root(anchor_block)
anchor_epoch = get_current_epoch(anchor_state)
justified_checkpoint = Checkpoint(epoch=anchor_epoch, root=anchor_root)
finalized_checkpoint = Checkpoint(epoch=anchor_epoch, root=anchor_root)
proposer_boost_root = Root()
return Store(
time=uint64(anchor_state.genesis_time + SECONDS_PER_SLOT * anchor_state.slot),
genesis_time=anchor_state.genesis_time,
justified_checkpoint=justified_checkpoint,
finalized_checkpoint=finalized_checkpoint,
best_justified_checkpoint=justified_checkpoint,
proposer_boost_root=proposer_boost_root,
equivocating_indices=set(),
blocks={anchor_root: copy(anchor_block)},
block_states={anchor_root: copy(anchor_state)},
checkpoint_states={justified_checkpoint: copy(anchor_state)},
)
Finally, the main fork-choice rule is given by the following.
def get_head(store: Store) -> Root:
# Get filtered block-tree that only includes viable branches
blocks = get_filtered_block_tree(store)
# Execute the LMD-GHOST fork-choice
head = store.justified_checkpoint.root
while True:
children = [
root for root in blocks.keys()
if blocks[root].parent_root == head
]
if len(children) == 0:
return head
# Sort by latest attesting balance with ties broken lexicographically
# Ties broken by favoring block with lexicographically higher root
head = max(children, key=lambda root: (get_latest_attesting_balance(store, root), root))
The function get_head
returns the head of the chain. It proceeds as follows:
-
Get the latest justified block hash, call it $B$ (this is implicit in
get_filtered_block_tree
); -
Get the subtree of blocks rooted in $B$ (done by
get_filtered_block_tree
); - Filter that for blocks whose slot exceeds the slot of $B$;
-
Walk down the tree, at each step where a block has multiple children selecting the child with the strongest support, i.e., the highest
get_latest_attesting_balance
[https://github.com/ethereum/annotated-spec/blob/master/phase0/fork-choice.md#get_head]
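The walk in the last step can be demonstrated on a toy block tree (get_head_toy is a simplified stand-in for get_head; roots and weights are made up):

```python
def get_head_toy(children, weights, justified_root):
    """Toy version of get_head: starting from the latest justified block,
    repeatedly descend into the child with the largest attesting balance,
    breaking ties by the lexicographically higher root."""
    head = justified_root
    while True:
        kids = children.get(head, [])
        if not kids:
            return head
        head = max(kids, key=lambda root: (weights[root], root))

# A small fork: justified block J has children A and B; B has child C.
children = {"J": ["A", "B"], "B": ["C"]}
weights = {"A": 10, "B": 25, "C": 25}
assert get_head_toy(children, weights, "J") == "C"  # B outweighs A, then C is the leaf

# With equal weight, the lexicographically higher root wins the tie:
weights = {"A": 25, "B": 25, "C": 25}
assert get_head_toy(children, weights, "J") == "C"
```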
Finally, as shown with FFG Casper, in Gasper validators must not cast two distinct attestations
Slashing
def is_slashable_attestation_data(data_1: AttestationData, data_2: AttestationData) -> bool:
"""
Check if ``data_1`` and ``data_2`` are slashable according to Casper FFG rules.
"""
return (
# Double vote
(data_1 != data_2 and data_1.target.epoch == data_2.target.epoch) or
# Surround vote
(data_1.source.epoch < data_2.source.epoch and data_2.target.epoch < data_1.target.epoch)
)
If a validator violates either condition, the evidence of the violation can be observed, at which point the validator’s deposit is slashed with a reward given to the submitter of the evidence transaction.
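Applying the two rules to toy data (simplified stand-ins for the SSZ containers; epochs and roots are made up) makes the double-vote and surround-vote cases concrete:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ckpt:
    epoch: int
    root: str

@dataclass(frozen=True)
class AttData:  # simplified stand-in for AttestationData
    source: Ckpt
    target: Ckpt

def is_slashable(data_1: AttData, data_2: AttData) -> bool:
    """Same logic as is_slashable_attestation_data above."""
    return (
        # Double vote: two different attestations for the same target epoch
        (data_1 != data_2 and data_1.target.epoch == data_2.target.epoch) or
        # Surround vote: data_1's source-target span strictly surrounds data_2's
        (data_1.source.epoch < data_2.source.epoch
         and data_2.target.epoch < data_1.target.epoch)
    )

s = lambda e, r="0x0": Ckpt(e, r)
# Double vote: same target epoch (5), different target roots.
assert is_slashable(AttData(s(4), s(5, "0xa")), AttData(s(4), s(5, "0xb")))
# Surround vote: (2 -> 7) surrounds (3 -> 5).
assert is_slashable(AttData(s(2), s(7)), AttData(s(3), s(5)))
# Honest pair: disjoint, advancing votes are not slashable.
assert not is_slashable(AttData(s(4), s(5)), AttData(s(5), s(6)))
```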
More on Slashing
class ProposerSlashing(Container):
signed_header_1: SignedBeaconBlockHeader
signed_header_2: SignedBeaconBlockHeader
class AttesterSlashing(Container):
attestation_1: IndexedAttestation
attestation_2: IndexedAttestation
In this section we present the properties that Gasper should satisfy as a consensus protocol, following the formalization introduced by Neu et al.
The goal of a consensus protocol is to allow all participants to reach a common decision despite the presence of faulty ones. In our context, this translates into allowing honest validators to grow a finalized chain in which all blocks constitute consistent state transitions with each other. Here we allow validators to be Byzantine, to be temporarily offline, or to suffer latency problems. Informally, this translates to the following properties.
-
Safety: The set of finalized blocks
$F(G)$ for any view$G$ can never contain two conflicting blocks. - Liveness: The set of finalized blocks can actually grow.
A novelty that blockchains have introduced is the notion of dynamically available protocols, i.e., consensus protocols that can support an unknown and variable number of participants. One limitation of these protocols, as a result of the CAP theorem, is that they are not tolerant to network partitions. In particular, when the network partitions, honest participants in a dynamically available protocol will think that many participants are asleep and keep confirming transactions, potentially leading to safety violations. A different approach is instead taken by standard permissioned BFT protocols such as PBFT or HotStuff. These are designed for partially synchronous networks, and a quorum of at least two-thirds of all the participants is required to finalize transactions, ensuring safety under network partition, but not liveness. Liveness is guaranteed only after GST. It is impossible for any consensus protocol to be both safe under network partition and live under dynamic participation.
In other words, the availability-finality dilemma says that there cannot be a consensus protocol for state-machine replication, one that outputs a single chain, that provides both properties. The next best thing is then to ask for a protocol with two confirmation rules that outputs two chains, one that provides availability under dynamic participation, and one that provides finality even under network partitions, and in the long run they should agree on a common account of history. [https://decentralizedthoughts.github.io/2020-11-01-ebb-and-flow-protocols-a-resolution-of-the-availability-finality-dilemma/]
Based on this observation, Neu et al. give a formalization of the properties that Gasper, according to its design goals, should satisfy. In particular, given the time model described at the beginning of this document, they define the notion of
-
Finality: The finalized chain
$𝖫𝖮𝖦_{\text{fin}}$ is guaranteed to be accountable safe, i.e., in the event of a safety violation, one can provably identify Byzantine validators that have violated the protocol, at all times, and live after $\max\{𝖦𝖲𝖳,𝖦𝖠𝖳\}$, provided that fewer than $\beta_1$ proportion of all the nodes are adversarial. -
Dynamic Availability: If
$𝖦𝖲𝖳=0$ , i.e., if the network is synchronous, the available chain$𝖫𝖮𝖦_{\text{da}}$ is guaranteed to be safe and live at all times, provided that at all times fewer than$\beta_2$ proportion of the awake nodes are adversarial. -
Prefix:
$𝖫𝖮𝖦_{\text{fin}}$ is a prefix of$𝖫𝖮𝖦_{\text{da}}$ at all times.
In the case of Gasper, we want
In other words, together, Finality and Dynamic Availability say that the finalized chain $𝖫𝖮𝖦_{\text{fin}}$ is safe under network partitions, i.e., before $\max\{𝖦𝖲𝖳,𝖦𝖠𝖳\}$, and can experience liveness violations before GST or GAT, and afterwards (after validators wake up and the network turns synchronous) catches up with the available chain $𝖫𝖮𝖦_{\text{da}}$. The Prefix property ensures that eventually all clients, no matter what confirmation rule they follow, will still agree on a single account of history. [https://decentralizedthoughts.github.io/2020-11-01-ebb-and-flow-protocols-a-resolution-of-the-availability-finality-dilemma/]
In the context of Gasper, FFG Casper produces the finalized chain, whereas the LMD-GHOST protocol outputs the available chain. However, as we delve deeper into the second part of this document, we'll observe that LMD-GHOST faces challenges in handling dynamic availability. Various solutions have been suggested to mitigate these problems, and continuous research efforts are being made to refine and enhance its performance.
One of the problems that proof-of-stake consensus protocols have is that they are subject to long-range attacks. In a long-range attack, an adversary corrupts past participants in the consensus protocol in order to re-write the history of the blockchain. The reason why this attack can be mounted in a proof-of-stake-based system is that, in such a system, the creation of a block is costless, contrary to proof-of-work-based systems where some external resource, e.g., computation, must be spent in order to create a new block.
In practice, many proof-of-stake systems deal with long-range attacks by requiring key-evolving cryptography, using more refined chain selection rules, or other techniques. In the case of Ethereum, Gasper deals with long-range attacks through weak subjectivity checkpoints.
To understand what a weak subjectivity checkpoint is, we first properly recall the notion of weak subjectivity, as introduced by Buterin in 2014.
A consensus protocol can be categorized according to the following paradigms.
- Objective: a new node coming onto the network with no knowledge except (i) the protocol definition and (ii) the set of all blocks and other important messages that have been published can independently come to the exact same conclusion as the rest of the network on the current state. (This is the case of proof-of-work-based blockchains, such as Bitcoin)
- Subjective: the system has stable states where different nodes come to different conclusions, and a large amount of social information is required in order to participate. (This is the case of social-network-based blockchains, such as Stellar)
For proof-of-stake, a third paradigm can be defined.
- Weakly subjective: a new node coming onto the network with no knowledge except (i) the protocol definition, (ii) the set of all blocks and other important messages that have been published and (iii) a state from less than $N$ blocks ago that is known to be valid can independently come to the exact same conclusion as the rest of the network on the current state, unless there is an attacker that permanently has more than $X$ percent control over the consensus set. [https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity]
In other terms, through weak subjectivity it is possible to obtain checkpoints that act as new genesis blocks, and we forbid participants from reverting more than
Weak Subjectivity
Any Checkpoint
object can be used as a weak subjectivity checkpoint. These are distributed by providers, downloaded by users and/or distributed as a part of clients, and used as input while syncing a client.
def get_latest_weak_subjectivity_checkpoint_epoch(state, safety_decay=0.1):
# Returns the epoch of the latest weak subjectivity checkpoint for the given
# `state` and `safety_decay`. The default `safety_decay` used should be 10% (= 0.1)
# The calculations in this document do not account for the withdrawability delay.
# We should factor that in by adding MIN_VALIDATOR_WITHDRAWABILITY_DELAY to the
# calculated subjectivity period.
weak_subjectivity_mod = MIN_VALIDATOR_WITHDRAWABILITY_DELAY
val_count = len(get_active_validator_indices(state, get_current_epoch(state)))
if val_count >= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT:
weak_subjectivity_mod += 256 * ((safety_decay*CHURN_LIMIT_QUOTIENT/2) // 256)
else:
# This means val_count < MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT
weak_subjectivity_mod += 256 * ((safety_decay*val_count/(2*MIN_PER_EPOCH_CHURN_LIMIT)) // 256)
return state.finalized_checkpoint.epoch - (state.finalized_checkpoint.epoch % weak_subjectivity_mod)
Two important notions in the context of weak subjectivity are the following.
- Weak subjectivity period: the number of recent epochs within which there must be a weak subjectivity checkpoint to ensure that an attacker who takes control of the validator set at the beginning of the period is slashed at least a minimum threshold in the event that a conflicting checkpoint is finalized; and
-
Safety decay
$D$ : the loss in the 1/3rd consensus safety margin of the FFG Casper mechanism because of the changing validator set. The new safety margin that the mechanism can tolerate becomes $\frac{1}{3} - D$. [https://notes.ethereum.org/@adiasg/weak-subjectvity-eth2]
Weak Subjectivity Period
def compute_weak_subjectivity_period(state: BeaconState) -> uint64:
    """
    Returns the weak subjectivity period for the current ``state``.
    This computation takes into account the effect of:
        - validator set churn (bounded by ``get_validator_churn_limit()`` per epoch), and
        - validator balance top-ups (bounded by ``MAX_DEPOSITS * SLOTS_PER_EPOCH`` per epoch).
    A detailed calculation can be found at:
    https://github.com/runtimeverification/beacon-chain-verification/blob/master/weak-subjectivity/weak-subjectivity-analysis.pdf
    """
    ws_period = MIN_VALIDATOR_WITHDRAWABILITY_DELAY
    N = len(get_active_validator_indices(state, get_current_epoch(state)))
    t = get_total_active_balance(state) // N // ETH_TO_GWEI
    T = MAX_EFFECTIVE_BALANCE // ETH_TO_GWEI
    delta = get_validator_churn_limit(state)
    Delta = MAX_DEPOSITS * SLOTS_PER_EPOCH
    D = SAFETY_DECAY

    if T * (200 + 3 * D) < t * (200 + 12 * D):
        epochs_for_validator_set_churn = (
            N * (t * (200 + 12 * D) - T * (200 + 3 * D)) // (600 * delta * (2 * t + T))
        )
        epochs_for_balance_top_ups = (
            N * (200 + 3 * D) // (600 * Delta)
        )
        ws_period += max(epochs_for_validator_set_churn, epochs_for_balance_top_ups)
    else:
        ws_period += (
            3 * N * D * t // (200 * Delta * (T - t))
        )

    return ws_period
def is_within_weak_subjectivity_period(store: Store, ws_state: BeaconState, ws_checkpoint: Checkpoint) -> bool:
    # Clients may choose to validate the input state against the input Weak Subjectivity Checkpoint
    assert ws_state.latest_block_header.state_root == ws_checkpoint.root
    assert compute_epoch_at_slot(ws_state.slot) == ws_checkpoint.epoch

    ws_period = compute_weak_subjectivity_period(ws_state)
    ws_state_epoch = compute_epoch_at_slot(ws_state.slot)
    current_epoch = compute_epoch_at_slot(get_current_slot(store))
    return current_epoch <= ws_state_epoch + ws_period
Calculating the weak subjectivity period, as reported by Aditya Asgaonkar, results in the following table.
For example, assuming a validator set of at least
So what would a world powered by weakly subjective consensus look like? First of all, nodes that are always online would be fine; in those cases weak subjectivity is by definition equivalent to objectivity. Nodes that pop online once in a while, or at least once every $N$ blocks, would also be fine, because they would be able to constantly get an updated state of the network. However, new nodes joining the network, and nodes that appear online after a very long time, would not have the consensus algorithm reliably protecting them. Fortunately, for them, the solution is simple: the first time they sign up, and every time they stay offline for a very very long time, they need only get a recent block hash from a friend, a blockchain explorer, or simply their software provider, and paste it into their blockchain client as a checkpoint. They will then be able to securely update their view of the current state from there. [https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity]
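For concreteness, the arithmetic of `compute_weak_subjectivity_period` can be exercised standalone. The sketch below restates the formula with constants mirroring mainnet-like values; the constant values and the helper name `ws_period_sketch` are assumptions of this sketch, not part of the spec.

```python
# Standalone sketch of the weak subjectivity period arithmetic.
# The constants below mirror mainnet-like values, stated here as assumptions.
MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256  # epochs
MAX_DEPOSITS = 16                          # per block
SLOTS_PER_EPOCH = 32
SAFETY_DECAY = 10                          # percent

def ws_period_sketch(N: int, t: int, T: int, delta: int) -> int:
    """N: active validators, t: average effective balance (ETH),
    T: max effective balance (ETH), delta: validator churn limit per epoch."""
    Delta = MAX_DEPOSITS * SLOTS_PER_EPOCH  # max balance top-ups per epoch
    D = SAFETY_DECAY
    ws_period = MIN_VALIDATOR_WITHDRAWABILITY_DELAY
    if T * (200 + 3 * D) < t * (200 + 12 * D):
        churn = N * (t * (200 + 12 * D) - T * (200 + 3 * D)) // (600 * delta * (2 * t + T))
        top_ups = N * (200 + 3 * D) // (600 * Delta)
        ws_period += max(churn, top_ups)
    else:
        ws_period += 3 * N * D * t // (200 * Delta * (T - t))
    return ws_period
```

For example, with $N = 32768$ validators, an average effective balance of $32$ ETH, and a churn limit of $4$, this sketch yields a period of $665$ epochs.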
Neu et al. [https://ethresear.ch/t/a-balancing-attack-on-gasper-the-current-candidate-for-eth2s-beacon-chain/8079] have shown how the original version of Gasper, i.e., the one presented in the first part, suffers from a liveness issue that leads to loss of safety for the dynamically available ledger. They presented an attack against Gasper, called balancing attack, in the synchronous network model with adversarial network delay.
Recall that Gasper proceeds in epochs which are further subdivided into $C$ slots each. For simplicity, let $C$ divide $n$ so that every slot has a committee of size $\frac{n}{C}$. For each epoch, a random permutation of all $n$ validators assigns validators to slots’ committees and designates a proposer per slot. Per slot, the proposer produces a new block extending the tip determined by the fork-choice rule $HLMD(G)$ executed in local view $G$. Then, each validator of the slot’s committee decides what block to vote for using $HLMD(G)$ in local view $G.$ For the Casper FFG layer, a block can only become finalized if two-thirds of validators vote for it. The attacker aims to keep honest validators split between two options (left and right chain) indefinitely, so that neither option ever gets two-thirds votes and thus no block ever gets finalized. Key technique to maintain this split is that some adversarial validators (‘swayers’) withhold their votes and release them only at specific times and to specific subsets of honest nodes in order to influence the fork-choice of honest nodes and thus steer which honest nodes vote left/right. [https://arxiv.org/pdf/2009.04987.pdf]
The attack requires an adversary to (i) know at what points in time honest validators execute HLMD-GHOST and (ii) be able to target a message for delivery to an honest validator just before a certain point in time. Moreover, it assumes that (iii) honest validators cannot quickly update each other about messages they have just received.
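The core balancing mechanism can be illustrated with a toy rebalancing rule (our own simplification, not the construction from the paper): the adversary releases just enough withheld swayer votes toward whichever chain is behind to restore the tie, so neither chain ever reaches a two-thirds quorum.

```python
def rebalance(left: int, right: int, withheld: int):
    """Release the minimum number of withheld swayer votes needed to re-tie
    the left and right chains; returns the updated (left, right, withheld)."""
    release = min(abs(left - right), withheld)
    if left < right:
        left += release
    else:
        right += release
    return left, right, withheld - release
```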
The adversary waits until the proposer in the first slot of an epoch is Byzantine and there are at least six Byzantine validators in every slot of the epoch. Let us assume that this is the case in epoch
Observe that if the number of honest validators is not even, then the adversary recruits a Byzantine validator (
At this point, the adversary selectively releases withheld votes (from Byzantine validators (
The adversary continues this strategy to maintain the tie throughout epoch
During epoch
Finally, for epoch
The balancing attack just presented is based on the fact that an attacker, by manipulating the network, can create a disagreement at the end of each slot regarding which messages count for the fork-choice, and therefore a disagreement on which chain is the winning chain.
The proposed solution to avoid the balancing attack is to increase the attestation weight of the proposer of a block in a given slot, if some conditions are met. In detail, let us assume that all the attesters assigned to slot
Proposer Boosting
Proposer LMD Score Boosting: [ethereum/consensus-specs#2730]
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
    ...
    # Add proposer score boost if the block is timely
    time_into_slot = (store.time - store.genesis_time) % SECONDS_PER_SLOT
    is_before_attesting_interval = time_into_slot < SECONDS_PER_SLOT // INTERVALS_PER_SLOT
    if get_current_slot(store) == block.slot and is_before_attesting_interval:
        store.proposer_boost_root = hash_tree_root(block)
    ...
Recall that `on_block` runs whenever a `SignedBeaconBlock` is received, updating the `store` for the fork-choice rule.
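The cached `proposer_boost_root` then adds extra weight to the boosted block's subtree during fork-choice. A simplified sketch of that contribution, using the mainnet value `PROPOSER_SCORE_BOOST = 40`; the helper below is our own simplification of the pyspec's `get_weight`, not the spec itself.

```python
PROPOSER_SCORE_BOOST = 40  # percent (mainnet value)

def boosted_weight(attestation_weight: int, committee_weight: int, is_boosted: bool) -> int:
    """attestation_weight: LMD weight of the block's subtree;
    committee_weight: total weight of one slot's committee."""
    boost = committee_weight * PROPOSER_SCORE_BOOST // 100 if is_boosted else 0
    return attestation_weight + boost
```

A timely proposal thus starts with a head start of 40% of one committee's weight over competing branches.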
Despite the proposer weight boosting solution presented above, Neu et al. have shown that the LMD aspect of HLMD-GHOST still enables balancing attacks to Gasper.
LMD works as follows. Every validator keeps a local table of the latest message received from each other validator, which can be updated if some conditions are met. In particular, when a validator
We present the example of the attack given by Neu et al. Let
During the first four slots the Byzantine proposers create two parallel chains of
For the fifth slot, the adversary includes all equivocating votes for the left and the right chains into two distinct blocks and attaches these blocks on the left and right chains, respectively. Then, it releases the two equivocating blocks from the fifth slot in such a way that roughly half of the honest validators see the left block first (let us call
Honest validators in
If now we assume that slot
At the end of slot
So, this repeats in subsequent slots, with the honest validators in
It is worth noting that, by assumption, if a message is received by an honest validator, then this message will be received also by every other honest validator. This implies that honest validators in
Equivocations are attributable faults, punishable by slashing a posteriori, but this does not prevent the attack vector a priori given that only one validator is required for it, and that there is no immediate recovery, because the same validator can continue producing equivocating attestations in subsequent slots as well. [https://arxiv.org/pdf/2209.03255.pdf]
We present the notion of equivocation discounting, a solution to the attack presented in the previous section. At a high level, with equivocation discounting, we rely on every validator eventually recognizing equivocations and discarding them, no longer giving them any weight. Formally, we require:
- Fork-choice discounting: when a validator $v_i$ runs HLMD-GHOST at slot $t$, it only counts votes from eligible validators $v_j$, namely those for which the view of $v_i$ contains at most a single vote for each slot in which $v_j$ voted, i.e., validators that are not viewed to have equivocated at previous slots.
In other words, we discard equivocating attestations from the fork-choice, and discount, i.e., do not consider the weight of, all future votes of equivocating validators for fork-choice purposes. Together with the LMD rule, the protocol considers only each validator’s most recent vote (or attestation) and, among them, only the non-equivocating ones.
Finally, observe that if a Byzantine validator
Equivocation Discounting
Remove equivocating validators from fork-choice consideration: [ethereum/consensus-specs#2845]
def on_attestation(store: Store, attestation: Attestation, is_from_block: bool=False) -> None:
    ...
    # Update latest messages for attesting indices
    update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
with `update_latest_messages` defined as
def update_latest_messages(store: Store, attesting_indices: Sequence[ValidatorIndex], attestation: Attestation) -> None:
    target = attestation.data.target
    beacon_block_root = attestation.data.beacon_block_root
    non_equivocating_attesting_indices = [i for i in attesting_indices if i not in store.equivocating_indices]
    for i in non_equivocating_attesting_indices:
        if i not in store.latest_messages or target.epoch > store.latest_messages[i].epoch:
            store.latest_messages[i] = LatestMessage(epoch=target.epoch, root=beacon_block_root)
def on_attester_slashing(store: Store, attester_slashing: AttesterSlashing) -> None:
    """
    Run ``on_attester_slashing`` immediately upon receiving a new ``AttesterSlashing``
    from either within a block or directly on the wire.
    """
    attestation_1 = attester_slashing.attestation_1
    attestation_2 = attester_slashing.attestation_2
    assert is_slashable_attestation_data(attestation_1.data, attestation_2.data)
    state = store.block_states[store.justified_checkpoint.root]
    assert is_valid_indexed_attestation(state, attestation_1)
    assert is_valid_indexed_attestation(state, attestation_2)

    indices = set(attestation_1.attesting_indices).intersection(attestation_2.attesting_indices)
    for index in indices:
        store.equivocating_indices.add(index)
This function updates `store.equivocating_indices` with the validators that cast equivocating votes.
The Avalanche Attack is an attack at proof-of-stake GHOST that combines selfish mining with equivocations. Observe that this attack is possible because of the assumed underlying proof-of-stake mechanism. Proof-of-work does not have equivocations. The attack exploits the fact that the weight of a block in GHOST is given by the number of ancestors that such block has. This differs from the vote-based variant of GHOST as used in PoS Ethereum, where block weight is determined by votes and potentially a proposal boost. [https://arxiv.org/pdf/2203.01315.pdf] However, a similar attack exists also in the vote-based paradigm. Finally, note that only GHOST is considered, because its variant with LMD does not suffer from this attack.
The adversary uses withheld blocks to displace an honest chain once it catches up in subtree weight with the number of withheld adversarial blocks. The withheld blocks are released in a flat but wide subtree, exploiting the fact that under the GHOST rule such a subtree can displace a long chain. Only two withheld blocks enter the canonical chain permanently, while the other withheld blocks can subsequently be reused (through equivocations) to build further subtrees to displace even more honest blocks. The attack exploits a specific weakness of the GHOST rule in combination with equivocations from PoS, namely that an adversary can reuse ‘uncle blocks’ in GHOST, and thus such equivocations contribute to the weight of multiple ancestors. Formal security proof of PoS GHOST seems doomed. [https://ethresear.ch/t/avalanche-attack-on-proof-of-stake-ghost/11854]
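The weakness can be seen in a minimal ancestor-count GHOST implementation (our own illustration, not the paper's model): a flat subtree of $k$ withheld blocks sharing one parent carries weight $k$ at the fork point, just like an honest chain of length $k$, so releasing one extra block suffices to displace the chain.

```python
from collections import defaultdict

def ghost_head(parent_of: dict) -> str:
    """Ancestor-count GHOST: at each step, descend into the child whose
    subtree contains the most blocks; return the resulting leaf."""
    children = defaultdict(list)
    for block, parent in parent_of.items():
        children[parent].append(block)

    def subtree_size(block) -> int:
        return 1 + sum(subtree_size(child) for child in children[block])

    head = "genesis"
    while children[head]:
        head = max(children[head], key=subtree_size)
    return head
```

Against an honest chain of three blocks, a flat adversarial subtree of four blocks wins the fork-choice at the fork point.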
We present the example of the attack given by Neu et al. Let us assume that the adversary starts with
First, the adversary withholds its subtree of
Once honest nodes reach a chain of length
Note that the adversary can reuse blocks
Honest nodes now build on
Observe that, with equivocation discounting, this attack can no longer be mounted: equivocating blocks are not considered during the fork-choice.
A reorg is an event where a block that was part of the canonical chain becomes no longer part of the canonical chain because a competing block beat it out. Reorgs can occur naturally due to network latency (also called short reorgs), or maliciously. In the first case, let us consider for example the proof-of-work. Here, if two miners create blocks
Reorgs can be classified in two categories: ex post reorgs, i.e., an adversary observes a block which subsequently attempts to fork out, and ex ante reorgs, i.e., an adversary attempts to fork out a future block that is unknown to the adversary at the start of the attack.
Finality is a situation where a fork-choice rule so strongly favors a block that it is mathematically impossible, or economically infeasible for that block to get reorged. In some fork-choice rules, reorgs cannot happen; the fork-choice rule simply extends the existing chain by appending any blocks that have been finalized through consensus. In other fork-choice rules, reorgs are very frequent. [https://www.paradigm.xyz/2021/07/ethereum-reorgs-after-the-merge]
With Gasper, long reorgs, i.e., reorgs that aim to fork out several blocks, are not possible because all blocks that are deeper than
We present a simple ex ante reorg where an adversary, being the proposer for slot
In slot $n + 1$ the adversary privately creates block $n + 1$ on block $n$ and attests to it. Honest validators of slot $n+1$ do not see any block and thus attest to block $n$ as head of the chain. In the next slot, an honest proposer publishes block $n + 2$ building on block $n$, which is the current head in their view. Simultaneously, the adversary finally publishes block $n + 1$ and the attestation voting for block $n + 1$. All honest validators of slot $n + 2$ attest to block $n + 1$ as head of the chain, because it has more weight than block $n + 2$. In the next slot block $n + 3$ is proposed building on block $n + 1$. Block $n + 2$ is reorged out. [https://eprint.iacr.org/2021/1413.pdf]
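In this example the adversary wins the fork between block $n+1$ and block $n+2$ (both children of block $n$) with a single withheld attestation, because the honest votes of slot $n+1$ point to block $n$ and count for neither side. A toy encoding of that comparison (our own, ignoring proposer boost):

```python
def head_of_fork(votes_on_private: int, votes_on_honest: int) -> str:
    """LMD weights at the fork between the private block n+1 and the honest
    block n+2; the private block wins with strictly more weight."""
    return "n+1 (private)" if votes_on_private > votes_on_honest else "n+2 (honest)"
```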
Proposer boosting does not mitigate all ex ante reorgs [https://ethresear.ch/t/change-fork-choice-rule-to-mitigate-balancing-and-reorging-attacks/11127]. Let
The notion of view-merge, initially introduced by the Highway protocol, aims to solve some problems that arise with the proposer boosting technique. For example, proposer boosting can be weaponized by Byzantine proposers to conclude a reorg, lowering the amount of attestations needed for it by the weight of boost. [https://ethresear.ch/t/change-fork-choice-rule-to-mitigate-balancing-and-reorging-attacks/11127]
The idea behind the view-merge is to join the view of the honest proposer with those of all the other honest validators before they cast a vote in their slot. The result is that in slots with an honest proposer all honest validators vote in favor of the honest proposal. [https://arxiv.org/pdf/2209.03255.pdf]
The general concept of view-merge is the following:
- Attesters freeze their view $\Delta$ seconds before the beginning of a new slot, caching new messages for later processing. (Note that, currently, the duration of a slot in Gasper is $3\Delta$, with $\Delta$ being the message delay. In particular, the slot is divided into $3$ parts of $4$ seconds each: $4$ seconds to propose a block, $4$ seconds to attest, and $4$ seconds to aggregate the attestations.)
- The proposer, instead, does not freeze its view, and proposes, based on its view, on top of the head of the chain at the beginning of its slot. Moreover, the proposer references all attestations and blocks it used in its fork-choice in some point-to-point message, which is propagated with the block.
- Attesters include the referenced attestations in their view, and attest based on the merged view.
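The attester side of this procedure can be sketched as follows (our own simplification; message formats, timing, and the fork-choice itself are elided):

```python
class Attester:
    """Toy attester state for view-merge."""

    def __init__(self):
        self.view = set()      # messages the fork-choice runs on
        self.buffer = set()    # messages cached while the view is frozen
        self.frozen = False

    def freeze(self):
        # called Delta seconds before the slot boundary
        self.frozen = True

    def receive(self, msg):
        (self.buffer if self.frozen else self.view).add(msg)

    def merge_and_vote(self, proposer_references):
        # merge the references propagated with the proposal, then attest
        # based on the merged view
        self.view |= set(proposer_references)
        self.frozen = False
        return self.view
```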
If the network delay is less than
In this section we present Goldfish, a simplified variant of LMD-GHOST, introduced by D'Amato et al., that implements the notions of view-merge and equivocation discounting. Goldfish can tolerate dynamic participation, it supports subsampling of validators (at each slot, the protocol can pseudo-randomly select a small group of validators to run the protocol on behalf of the total validator set), and it is provably secure and reorg resilient in synchronous networks with dynamic participation, assuming a majority of the validators follows the protocol honestly. Finally, Goldfish implements the notion of vote expiry, i.e., during each slot only votes from the immediately preceding slot influence the protocol’s behavior.
The adversary decides for each round and each honest validator whether it is asleep or not. Asleep validators do not execute the protocol. Messages delivered to an asleep validator get delivered to it only once the validator is no longer asleep. When a validator stops being asleep, it becomes dreamy. During this phase, it joins the protocol, usually over multiple rounds, using a special joining procedure specified by the protocol. Upon completion of this procedure, the honest validator becomes awake and then follows the protocol. Adversarial validators are always awake.
Goldfish implements a variant of GHOST, for the fork-choice rule, called GHOST-Eph. GHOST-Eph is a function that takes a view $G$ and slot $t$ as input, and finds the canonical GHOST-Eph chain determined by the votes within $G$ that were cast for slot $t$. More specifically, starting at the genesis block, the function iterates over a sequence of blocks from $G$, selecting as the next block, the child of the current block with the maximum number of validators that have cast a slot $t$ vote for a block within its subtree. This continues until it reaches a leaf of the block-tree, and outputs a complete chain from leaf to root. The fork-choice rule ignores votes from before slot $t$ in its decision (votes are ephemeral), lending GHOST-Eph its name. [https://arxiv.org/pdf/2209.03255.pdf]
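A minimal sketch of GHOST-Eph over a toy block-tree (our own illustration; votes are `(slot, block)` pairs and subsampling is ignored):

```python
from collections import defaultdict

def ghost_eph(parent_of: dict, votes: list, t: int) -> str:
    """Walk from genesis toward the child whose subtree gathered the most
    slot-t votes; votes from earlier slots are ignored (ephemeral votes)."""
    children = defaultdict(list)
    for block, parent in parent_of.items():
        children[parent].append(block)

    def subtree_votes(block) -> int:
        own = sum(1 for slot, target in votes if slot == t and target == block)
        return own + sum(subtree_votes(child) for child in children[block])

    head = "genesis"
    while children[head]:
        head = max(children[head], key=subtree_votes)
    return head
```

With three slot-1 votes for block A and one slot-2 vote for block B, the head at slot 2 is B: the older votes have expired.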
Slots in Goldfish have a duration of $3\Delta$ rounds. At the beginning of slot $t$, i.e., at round $3\Delta t$, each awake honest validator $v_i$ checks if it is eligible to propose a block for slot $t$, i.e., if $v_i$ is a leader for slot $t$, by evaluating the verifiable random function (VRF) with secret key $\mathscr{vsk}$. If $v_i$ is the leader for slot $t$, then $v_i$ identifies the tip of its canonical GHOST-Eph chain using the slot $t-1$ votes in its view, and it broadcasts a proposal message containing (i) a new block extending the tip and (ii) the union of its view and buffer, $G \cup \mathcal{B}$. (Observe that the buffer
At round $3\Delta t + 2\Delta$, each honest awake validator $v_j$ merges its buffer $\mathcal{B}$ containing the votes received over the period $(3\Delta t + \Delta, 3\Delta t + 2\Delta]$ with its view $G$. Then, $v_j$ identifies the new tip of its canonical GHOST-Eph chain using the slot $t$ votes in its updated view, takes the prefix of this chain corresponding to blocks from slots $\le t-\kappa$, and outputs this confirmed prefix as the Goldfish ledger.
At the start of the next slot, i.e., at round $3\Delta t + 3\Delta$, the buffer of any honest validator contains all the votes received by honest validators in the period $(3\Delta t + \Delta, 3\Delta t + 2\Delta]$, i.e., all the votes which they have merged into their view at round $3\Delta t + 2\Delta$. In particular, the proposal message of an honest leader includes all such votes, ensuring that the view in an honest leader’s proposal message is a superset of any honest validator’s views at round $3\Delta(t+1)+\Delta$. [https://arxiv.org/pdf/2209.03255.pdf]
A fast confirmation rule can also be added to Goldfish, allowing validators to confirm honest blocks proposed at the tip of their canonical GHOST-Eph chains within the same slot under optimistic conditions, i.e., under high participation and honest supermajority. This is done by adding
At the start of each slot, the proposer merges buffered votes into their local view, determines their canonical chain, and proposes and broadcasts a block extending it, together with their local view. One-fourth into each slot, each voter merges the proposed view into their local view, determines their canonical chain, and casts a vote on the tip. Two-fourths into the slot, all awake validators merge their buffers into their local view and run the optimistic fast confirmation rule. In particular, at round $4\Delta t + 2\Delta$, a validator merges its buffer and view, then marks a block $B$ proposed at the same slot $t$ as fast confirmed if $G$ contains a subset $G'$ of slot $t$ votes by distinct validators for $B$ such that more than $\frac{3}{4}$ of the eligible voters of slot $t$ voted for $B$. In this case, $B$ and its prefix are output as the Goldfish ledger at round $4\Delta t + 3\Delta$. If no block is fast confirmed $2\Delta$ rounds into a slot in the view of a validator, the validator uses the $\kappa$-slots-deep confirmation rule, i.e., the slow confirmation rule, at round $4\Delta t + 3\Delta$ to output the Goldfish ledger. However, validators do not roll back their ledgers: if the validator has previously fast confirmed a block within the last $\kappa$ slots, it continues to output that block. Finally, three-fourths into the slot, all awake validators again merge their buffers into their local view, and output a ledger according to GHOST-Eph. [https://arxiv.org/pdf/2209.03255.pdf]
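The fast-confirmation threshold itself is a direct restatement of the rule above:

```python
def fast_confirmed(votes_for_block: int, eligible_voters: int) -> bool:
    """Strictly more than 3/4 of the eligible slot-t voters voted for the block."""
    return 4 * votes_for_block > 3 * eligible_voters
```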
Goldfish guarantees the following properties, and the proof can be found in the full paper.
- Reorg resilience: Suppose the validator that has the proposal with the minimum VRF output within a slot is an honest validator. Then, the block proposed by that validator enters and stays in the canonical GHOST-Eph chain adopted by any honest validator at all future slots.
- Dynamic availability: Under a synchronous network in the sleepy model (i.e., for GST $= 0$), the ledger output by Goldfish provides $\frac{1}{2}$-safety and $\frac{1}{2}$-liveness at all times with overwhelming probability.
Observe that
Despite the security guarantees ensured by Goldfish, vote expiry as it is in this protocol leads to some problems. In particular, blocks in Goldfish do not accumulate safety against asynchrony as time goes on. This is because vote expiry after one slot means that Goldfish cannot tolerate a single slot in which all honest validators are asleep or in which they cannot hear from the other honest validators due to adversarially induced network delay. [https://arxiv.org/pdf/2209.03255.pdf]
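Vote expiry, the mechanism at issue here, can be viewed as a filter over the latest-message table before fork-choice. In the following sketch (our own, with a simplified map from validators to their latest `(slot, block)` message), Goldfish corresponds to an expiry period $\eta = 1$, while an unbounded $\eta$ recovers plain latest-message behavior:

```python
def active_latest_messages(latest: dict, t: int, eta: int) -> dict:
    """Keep a validator's latest message only if it was cast in one of the
    eta slots preceding slot t, i.e., in [t - eta, t - 1]."""
    return {v: (slot, block) for v, (slot, block) in latest.items()
            if t - eta <= slot <= t - 1}
```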
Goldfish is not considered practically viable to replace LMD-GHOST in Ethereum, due to its brittleness to temporary asynchrony: even a single slot of asynchrony can lead to a catastrophic failure, jeopardizing the safety of any previously confirmed block.
D'Amato and Zanolini introduce Recent Latest Message Driven GHOST (RLMD-GHOST), a protocol that generalizes both LMD-GHOST and Goldfish. As the former, RLMD-GHOST implements the latest message rule (LMD). As the latter, it implements view-merge and vote expiry. Differently from Goldfish, where only votes from the most recent slot are considered, RLMD-GHOST is parameterized by a vote expiry period $\eta$, i.e., only messages from the most recent
D'Amato et al. introduce with Goldfish the notion of active validator (there, validators which have completed the joining protocol are simply called awake, and validators which are executing the joining protocol are called dreamy) and assume a modified condition, i.e.,
D'Amato and Zanolini follow this distinction between awake and active validators, and use
The sleepy model in which the adversary is constrained by
RLMD-GHOST is implemented in the $\tau$-sleepy model, which allows for stronger and more general constraints on the corruption and sleepiness power of the adversary. In other words, the honest validators that are actively participating in the consensus protocol at slot
RLMD-GHOST proceeds in slots consisting of
RLMD-GHOST results in a synchronous protocol that has interesting practical properties: it is dynamically available and reorg resilient in the $\tau$-sleepy model, assuming that the honest validators that are actively participating in the consensus protocol at slot
As one can observe, both RLMD-GHOST and Goldfish (and also LMD-GHOST) have the same structure: first there is a proposing phase, then a voting phase, and finally a merge phase. D'Amato and Zanolini generalize this structure by defining propose-vote-merge protocols. These are protocols that proceed in slots consisting of $k$ rounds, each having a proposer
Propose: In this phase, which starts at the beginning of a slot, the proposer
Vote: Here, every validator
Merge: In this phase, every validator
Currently, Gasper takes between 64 and 95 slots to finalize blocks. Because of that, a significant portion of the chain is susceptible to reorgs. The possibility to capture MEV (Maximum Extractable Value) through such reorgs can then disincentivize honestly following the protocol, breaking the desired correspondence of honest and rational behavior. Moreover, the relatively long time to finality forces users to choose between economic security and faster transaction confirmation. This motivates the study of the so-called single slot finality protocols: consensus protocols that finalize a block in each slot and, more importantly, that finalize the block proposed at a given slot within such slot. A relevant video about single slot finality by Vitalik Buterin can be found here.
D'Amato and Zanolini propose a protocol that combines RLMD-GHOST with a finality gadget, resulting in a secure ebb-and-flow protocol that can finalize one block per slot. Importantly, the protocol they present can finalize the block proposed in a slot, within such slot.
Differently from the version of RLMD-GHOST presented above, here RLMD-GHOST proceeds in slots consisting of
In the figure above, head-votes represent votes cast with respect to the RLMD-GHOST protocol, i.e., the dynamically available protocol/component. To be more specific, a head vote is a tuple [HEAD-VOTE,
A checkpoint is justified in a view
In the protocol by D'Amato and Zanolini, head votes work exactly as in RLMD-GHOST, or any propose-vote-merge protocol, i.e., validators vote for the output of their fork-choice rule: when it is time to vote, validator
Finally, a slot in the SSF protocol follows this structure
When a proposer is honest, the network is synchronous, and an honest supermajority is online, the outcome is that the proposal gets fast confirmed and then justified, before the end of the slot. Moreover, if honest validators see the justification before the next slot, they will never cast an FFG vote with an earlier source, and so the proposal will never be reorged, even if later the network becomes asynchronous.
The SSF protocol is implemented by the following algorithm
In this protocol, blocks proposed by honest proposers under good conditions have very strong reorg resilience guarantees. On the other hand, their unreorgability is not known to observers by the end of the slot, and moreover no economic penalty can yet be ensured in case of a reorg, so we rely at this point on honesty assumptions. Finality is only achieved at the earliest after two slots. In order to truly have single slot finality, in the sense that an honest proposal can be finalized (and is finalized, under synchrony) within its proposal slot, another FFG voting round can be added, or, as D'Amato and Zanolini decided to do in the paper, validators send a different type of message acknowledging the justification. For example, if the checkpoint $(B,t)$ is justified at slot $t$, validators can send the acknowledgment message $((B,t), t)$, confirming their knowledge of the justification of $(B,t)$. This way, they signal that in future slots they will never cast an FFG vote with a source from a slot earlier than $t$. A slashing condition can be attached to this, almost identical to surround voting: it is slashable to cast an acknowledgment $((C,t),t)$ and an FFG vote $(A,t') \to (B,t'')$ with $t' < t < t''$, i.e., where the FFG vote surrounds the acknowledged checkpoint. Then, if 2/3 of the validators cast an acknowledgment, one can finalize the acknowledged checkpoint. [https://ethresear.ch/t/a-simple-single-slot-finality-protocol/14920]
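The acknowledgment slashing condition can be stated as a one-line predicate (our own encoding of the rule above):

```python
def ack_surround_slashable(ack_slot: int, ffg_source_slot: int, ffg_target_slot: int) -> bool:
    """An FFG vote (A, t') -> (B, t'') surrounds an acknowledged checkpoint
    (C, t), and is slashable, exactly when t' < t < t''."""
    return ffg_source_slot < ack_slot < ffg_target_slot
```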
The complete protocol is then the following