Restrict best LC update collection to canonical blocks #3553

Open
wants to merge 8 commits into base: dev
Conversation


@etan-status etan-status commented Nov 21, 2023

Currently, the best LC update for a sync committee period may refer to blocks that have later been orphaned, if they rank better than canonical blocks according to `is_better_update`. This was done because the most important task of the light client sync protocol is to track the correct `next_sync_committee`. However, practical implementation is quite tricky, because existing infrastructure such as fork choice modules can only be reused in limited form when collecting light client data. Furthermore, it becomes impossible to deterministically obtain the absolute best LC update available for any given sync committee period, because orphaned blocks may become unavailable.

For these reasons, a `LightClientUpdate` should only be served if it refers to data from the canonical chain as selected by fork choice. This also assists efforts toward a reliable backward sync in the future.
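
As a non-normative illustration of the serving rule above, here is a minimal sketch of how a beacon node could pick the best update to serve while skipping orphaned branches. The helper `canonical_block_root_at_slot` and the `candidate_updates` mapping are hypothetical; `is_better_update` is the existing ranking function from `specs/altair/light-client/sync-protocol.md`:

```python
# Non-normative sketch. `canonical_block_root_at_slot` and `candidate_updates`
# are hypothetical names; `is_better_update` is the ranking function from
# `specs/altair/light-client/sync-protocol.md`.
from typing import Callable, Dict, Optional

def select_best_canonical_update(
    candidate_updates: Dict["Root", "LightClientUpdate"],  # keyed by attested block root
    canonical_block_root_at_slot: Callable[["Slot"], Optional["Root"]],  # from fork choice
) -> Optional["LightClientUpdate"]:
    """Pick the best update of a period whose attested block is still canonical."""
    best: Optional["LightClientUpdate"] = None
    for block_root, update in candidate_updates.items():
        # Skip updates whose attested block has been orphaned.
        if canonical_block_root_at_slot(update.attested_header.beacon.slot) != block_root:
            continue
        if best is None or is_better_update(update, best):
            best = update
    return best
```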

minimal.zip
Extra test vectors based on v1.4.0-beta.7

etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Nov 21, 2023
Simplify best `LightClientUpdate` collection by tracking only canonical
data instead of tracking the best update across all branches within the
sync committee period.

- ethereum/consensus-specs#3553

@dapplion dapplion left a comment


This complicates implementations for little to no benefit; for that reason I somewhat oppose this. However, I do see the need for canonical updates in order to have a backfill protocol.

@etan-status

Could you please elaborate on the complication aspect? It led to quite a bit of simplification in the case of Nimbus:

One notable aspect is that even with the old system, a proper implementation needs to track separate branches; they are just tracked on a per-period basis. That is, it tracks the best `LightClientUpdate` for each (period, current_sync_committee, next_sync_committee) combination. Only when finality advances can this be simplified to tracking the best `LightClientUpdate` per (period). This can be tested with the minimal preset, where non-finality of an entire sync committee period is feasible.

With the new system that remains the same, but the best `LightClientUpdate` is tracked for each non-finalized block, in the same way that many other aspects are tracked for the purpose of fork choice.
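
For illustration only, the difference in bookkeeping keys between the two approaches might look roughly like this (a sketch; the names are not from Nimbus or any other implementation, and `Root` / `LightClientUpdate` are the usual spec types):

```python
# Sketch of the bookkeeping keys only; variable names are illustrative.
from typing import Dict, Tuple

# Old approach: best update per branch within a period, identified by the
# period number and the roots of its current/next sync committees;
# collapsible to one entry per period once finality advances.
best_update_per_branch: Dict[Tuple[int, "Root", "Root"], "LightClientUpdate"] = {}

# New approach: best update per non-finalized block, mirroring fork choice.
best_update_per_block: Dict["Root", "LightClientUpdate"] = {}
```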

So, similar to regular fork choice (which is already present):

  • When a new block is added, compute the data and attach it to the memory structure.
  • When a new head is selected, read from the memory structure and persist it to the database.
  • On finality, purge the affected entries from the memory structure.
    And, because the best `LightClientUpdate` doesn't change that often, the memory can be deduplicated using a reference count (or just a ref object, letting the language runtime deal with the count), as sketched below.
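
A minimal sketch of that bookkeeping, assuming the spec's `is_better_update` and hypothetical class/hook names (`CanonicalLightClientDataCollector`, `db_put_best_update`), with sync committee period boundary handling omitted for brevity; this is not taken from Nimbus or any other implementation:

```python
# Non-normative sketch of the per-block tracking described above; class and
# hook names are hypothetical. `is_better_update` is the spec ranking function,
# and sync committee period boundary handling is omitted for brevity.
from typing import Dict, Optional, Set

class CanonicalLightClientDataCollector:
    def __init__(self) -> None:
        # Best update known on the branch ending at each non-finalized block.
        # Values are shared references, so unchanged bests are deduplicated
        # across descendant blocks by the language runtime.
        self.best_update_by_block: Dict["Root", Optional["LightClientUpdate"]] = {}

    def on_block(self, block_root: "Root", parent_root: "Root",
                 block_update: Optional["LightClientUpdate"]) -> None:
        # When a new block is added: start from the parent's best update and
        # replace it if this block's data ranks better.
        best = self.best_update_by_block.get(parent_root)
        if block_update is not None and (best is None or is_better_update(block_update, best)):
            best = block_update
        self.best_update_by_block[block_root] = best

    def on_head(self, head_root: "Root", db_put_best_update) -> None:
        # When a new head is selected: persist that branch's best update.
        best = self.best_update_by_block.get(head_root)
        if best is not None:
            db_put_best_update(best)

    def on_finality(self, pruned_roots: Set["Root"]) -> None:
        # On finality: purge entries for blocks that were finalized or orphaned.
        for root in pruned_roots:
            self.best_update_by_block.pop(root, None)
```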

Regarding "little to no benefit", I think having canonical data made available on the network allows for better reasoning:

  • No other API exposes orphaned data (except perhaps when explicitly asked for via a by-root request).
  • It also avoids complications when feeding the data into the Portal network, because different nodes won't end up storing different versions of the data in the regular case.
  • Furthermore, it unlocks future backfill protocols for syncing the canonical history without recomputing it from the local database. Such a backfill protocol can include proofs of canonical history with the data, to ensure that, for example, someone isn't serving an arbitrary history that merely ends up at the same head sync committee, and that your node doesn't then serve that possibly malicious early history (leading to the verifiable head sync committee) to others.
  • Finally, it allows providing a reference implementation with the pyspecs, to ensure that most BNs compute the same history for the same chain.
  • Other implementations are not disallowed; it is a "should not", not a "shall not".

@dapplion dapplion commented Dec 1, 2023

From offline chat: it would be great to define a direction for a backfill spec to make the motivation for this PR stronger.

@etan-status

From offline chat: it would be great to define a direction for a backfill spec to make the motivation for this PR stronger.

https://hackmd.io/@etan-status/electra-lc

etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Mar 3, 2024
Introduce a test runner for upcoming EF test suites related to canonical
light client data collection.

- ethereum/consensus-specs#3553

@dapplion dapplion left a comment


Adding this restriction is sensible to unlock backfilling in the future. SHOULD NOT language is okay, to give Lodestar time to migrate and Lighthouse time to catch up.

Thanks for the thorough tests! Definitely helpful for developing this.

@etan-status

minimal.zip
Extra test vectors based on v1.4.0-beta.7

etan-status added a commit to etan-status/consensus-specs that referenced this pull request Mar 4, 2024
Beacon nodes can only compute light client data locally if they have the
corresponding `BeaconState` available. This is not the case for blocks
before the initially synced checkpoint state. The p2p-interface defines
endpoints to sync light client data, but it only supports forward sync.

To enable beacon nodes to backfill light client data, we must ensure
that a malicious peer cannot convince us of fraudulent data. While it
is possible to verify light client data against the locally backfilled
blocks, blocks are not necessarily available anymore on libp2p as they
are subject to `MIN_EPOCHS_FOR_BLOCK_REQUESTS`. Light client data stays
relevant for more than 5 months, and without validating it against local
block data it is impossible to distinguish canonical light client data
from fraudulent light client data that eventually culminates in a shared
history; the old periods in that case could still be manipulated.
Furthermore, agreeing on canonical data improves caching performance and
is relevant, e.g., for the portal network.

To support an efficient proof that a `LightClientUpdate` is canonical, it
is proposed to minimally extend the `BeaconState` to track the best
`SyncAggregate` of the current and previous sync committee period,
according to an implementation-independent ranking function.
The proposed ranking function is compatible with what consensus nodes
implementing ethereum#3553 are
already making available across libp2p and REST transports.
It is based on and compatible with the `is_better_update` function in
`specs/altair/light-client/sync-protocol.md`.

There are three minor differences to `is_better_update`:

1. `is_better_update` runs in the LC, so it runs without fork choice.
   It needs extra conditions to prefer older data over newer data.
   The `BeaconState` ranking function can use simpler logic.
2. The LC is always initialized from a post-Altair finalized checkpoint.
   This assumption does not hold in theoretical edge cases, requiring an
   extra guard for `ALTAIR_FORK_EPOCH` in the `BeaconState` function.
3. `is_better_update` has to deal with BNs serving incomplete data while
   they are still backfilling. This is not the case with `BeaconState`.

Once the data is available in the `BeaconState`, a light client data
backfill protocol could be defined that serves, for past periods:

1. A `LightClientUpdate` from requested `period` + 1 that proves
   that the entirety of `period` is finalized.
2. `BeaconState.historical_summaries[period].block_summary_root`
   at (1)'s `attested_header.beacon.state_root` + Merkle proof.
3. For each epoch's slot 0 block within requested `period`, the
   corresponding `LightClientHeader` + Merkle multi-proof for the
   block's inclusion into (2)'s `block_summary_root`.
4. For each of the entries from (3) with `beacon.slot` within `period`,
   the `current_sync_committee_branch` + Merkle proof for constructing
   `LightClientBootstrap`.
5. If (4) is not empty, the requested `period`'s
   `current_sync_committee`.
6. The best `LightClientUpdate` from `period`, if one exists,
   + Merkle proof that its `sync_aggregate` + `signature_slot` is
   selected as the canonical best one in (1)'s
   `attested_header.beacon.state_root`.

Only the proof in (6) depends on `BeaconState` tracking the best
light client data. This modification would enshrine the logic of a
subset of `is_better_update`, but does not require adding any
`LightClientXyz` data structures to the `BeaconState`.
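
As a non-normative aside, here is one way such an implementation-independent ranking over `SyncAggregate`s could look, keeping only the participation-based criteria of `is_better_update` (the extra guards for older data and incomplete backfill mentioned above are dropped); the function name and tie-breaking details are assumptions, not part of this PR or the linked proposal:

```python
# Illustrative sketch only; the function name and exact tie-breaking are
# assumptions, not part of this PR or the linked proposal.
def is_better_sync_aggregate(new_aggregate: "SyncAggregate",
                             old_aggregate: "SyncAggregate") -> bool:
    """Return True if `new_aggregate` should replace `old_aggregate` as the
    period's best aggregate tracked for light client data purposes."""
    max_participants = len(new_aggregate.sync_committee_bits)
    new_participants = sum(new_aggregate.sync_committee_bits)
    old_participants = sum(old_aggregate.sync_committee_bits)

    # Prefer aggregates with supermajority (two-thirds) participation,
    # mirroring the first criterion of `is_better_update`.
    new_supermajority = new_participants * 3 >= max_participants * 2
    old_supermajority = old_participants * 3 >= max_participants * 2
    if new_supermajority != old_supermajority:
        return new_supermajority

    # Otherwise, simply prefer higher participation.
    return new_participants > old_participants
```
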
@etan-status

✅ Nimbus 24.2.2 passing the additional tests.

etan-status added a commit to status-im/nimbus-eth2 that referenced this pull request Mar 5, 2024
Introduce a test runner for upcoming EF test suites related to canonical
light client data collection.

- ethereum/consensus-specs#3553
@etan-status

@hwwhww anything still blocking this?
