some fixes to please cargo-spellcheck (#3550)
* some fixes to please cargo-spellcheck

* some (not all) fixes for the impl guide

* fix
ordian committed Aug 2, 2021
1 parent 2cfda98 commit 33fe763
Showing 24 changed files with 160 additions and 143 deletions.
4 changes: 2 additions & 2 deletions polkadot/parachain/src/primitives.rs
@@ -271,7 +271,7 @@ impl IsSystem for Sibling {
}
}

-/// This type can be converted into and possibly from an AccountId (which itself is generic).
+/// This type can be converted into and possibly from an [`AccountId`] (which itself is generic).
pub trait AccountIdConversion<AccountId>: Sized {
/// Convert into an account ID. This is infallible.
fn into_account(&self) -> AccountId;
@@ -300,7 +300,7 @@ impl<'a> parity_scale_codec::Input for TrailingZeroInput<'a> {
}

/// Format is b"para" ++ encode(parachain ID) ++ 00.... where 00... is indefinite trailing
-/// zeroes to fill AccountId.
+/// zeroes to fill [`AccountId`].
impl<T: Encode + Decode + Default> AccountIdConversion<T> for Id {
fn into_account(&self) -> T {
(b"para", self)
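For illustration, a minimal sketch of the padding scheme that doc comment describes, assuming a 32-byte account type (the helper name and fixed width are assumptions, not part of this diff):

```rust
use parity_scale_codec::Encode;

/// Hypothetical helper: b"para" ++ encode(parachain ID), padded with
/// trailing zeroes to fill a 32-byte account.
fn para_account(para_id: u32) -> [u8; 32] {
    let mut account = [0u8; 32];
    let prefix = (b"para", para_id).encode(); // 4 prefix bytes ++ SCALE-encoded id
    account[..prefix.len()].copy_from_slice(&prefix);
    account // the remaining bytes stay as trailing zeroes
}
```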
3 changes: 2 additions & 1 deletion polkadot/roadmap/implementers-guide/README.md
@@ -1,6 +1,7 @@
# The Polkadot Parachain Host Implementers' Guide

-The implementers' guide is compiled from several source files with [mdBook](https://github.com/rust-lang/mdBook). To view it live, locally, from the repo root:
+The implementers' guide is compiled from several source files with [`mdBook`](https://github.com/rust-lang/mdBook).
+To view it live, locally, from the repo root:

```sh
cargo install mdbook mdbook-linkcheck mdbook-graphviz
24 changes: 12 additions & 12 deletions polkadot/roadmap/implementers-guide/src/SUMMARY.md
@@ -11,18 +11,18 @@
- [Architecture Overview](architecture.md)
- [Messaging Overview](messaging.md)
- [Runtime Architecture](runtime/README.md)
-- [Initializer Module](runtime/initializer.md)
-- [Configuration Module](runtime/configuration.md)
-- [Shared](runtime/shared.md)
-- [Disputes Module](runtime/disputes.md)
-- [Paras Module](runtime/paras.md)
-- [Scheduler Module](runtime/scheduler.md)
-- [Inclusion Module](runtime/inclusion.md)
-- [ParaInherent Module](runtime/parainherent.md)
-- [DMP Module](runtime/dmp.md)
-- [UMP Module](runtime/ump.md)
-- [HRMP Module](runtime/hrmp.md)
-- [Session Info Module](runtime/session_info.md)
+- [`Initializer` Module](runtime/initializer.md)
+- [`Configuration` Module](runtime/configuration.md)
+- [`Shared`](runtime/shared.md)
+- [`Disputes` Module](runtime/disputes.md)
+- [`Paras` Module](runtime/paras.md)
+- [`Scheduler` Module](runtime/scheduler.md)
+- [`Inclusion` Module](runtime/inclusion.md)
+- [`ParaInherent` Module](runtime/parainherent.md)
+- [`DMP` Module](runtime/dmp.md)
+- [`UMP` Module](runtime/ump.md)
+- [`HRMP` Module](runtime/hrmp.md)
+- [`Session Info` Module](runtime/session_info.md)
- [Runtime APIs](runtime-api/README.md)
- [Validators](runtime-api/validators.md)
- [Validator Groups](runtime-api/validator-groups.md)
4 changes: 2 additions & 2 deletions polkadot/roadmap/implementers-guide/src/disputes-flow.md
@@ -82,7 +82,7 @@ Only peers that already voted shall be queried for the dispute availability data

The peer to be queried for dispute data must be picked at random.

-A validator must retain code, persisted validation data and PoV until a block, that contains the dispute resolution, is finalized - plus an additional 24h.
+A validator must retain code, persisted validation data and PoV until a block, that contains the dispute resolution, is finalized - plus an additional 24 hours.

Dispute availability gossip must continue beyond the dispute resolution, until the post-resolution timeout has expired (equivalent to the timeout until which additional late votes are accepted).

@@ -108,7 +108,7 @@ If the count of votes pro or cons regarding the disputed block, reaches the requ

If a block is found invalid by a dispute resolution, it must be blacklisted to avoid resyncing or further building on that chain if other chains are available (to be detailed in the GRANDPA fork choice rule).

-A dispute accepts Votes after the dispute is resolved, for 1d.
+A dispute accepts Votes after the dispute is resolved, for 1 day.

If a vote is received after the dispute is resolved, it shall still be recorded in the state root, albeit yielding less reward.

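As a hedged illustration of the two quantities above (the required supermajority and the one-day window for late votes), a minimal sketch; the threshold formula is the standard Byzantine-fault-tolerance one and is an assumption here, not something stated on this page:

```rust
use std::time::Duration;

/// Assumed threshold: strictly more than two thirds of validators.
fn supermajority_threshold(n_validators: usize) -> usize {
    n_validators - (n_validators - 1) / 3
}

/// Late votes are still accepted (at reduced reward) for one day after resolution.
const LATE_VOTE_WINDOW: Duration = Duration::from_secs(24 * 60 * 60);

fn late_vote_accepted(secs_since_resolution: u64) -> bool {
    Duration::from_secs(secs_since_resolution) <= LATE_VOTE_WINDOW
}
```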
@@ -131,7 +131,7 @@ Ensure a vector is present in `pending_known` for each hash in the view that doe

Invoke `unify_with_peer(peer, view)` to catch them up to messages we have.

-We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their 'finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.
+We also need to use the `view.finalized_number` to remove the `PeerId` from any blocks that it won't be wanting information about anymore. Note that we have to be on guard for peers doing crazy stuff like jumping their `finalized_number` forward 10 trillion blocks to try and get us stuck in a loop for ages.

One of the safeguards we can implement is to reject view updates from peers where the new `finalized_number` is less than the previous.
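A minimal sketch of that safeguard, with assumed names:

```rust
/// Last finalized block number a peer has told us about (illustrative).
struct PeerState {
    finalized_number: u32,
}

/// Reject view updates that move `finalized_number` backwards.
fn handle_view_update(peer: &mut PeerState, new_finalized: u32) -> Result<(), ()> {
    if new_finalized < peer.finalized_number {
        return Err(()); // regressing finalized number: ignore and penalize
    }
    peer.finalized_number = new_finalized;
    Ok(())
}
```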

@@ -192,7 +192,7 @@ We maintain a few invariants:

The algorithm is the following:

-* Load the BlockEntry using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
+* Load the `BlockEntry` using `assignment.block_hash`. If it does not exist, report the source if it is `MessageSource::Peer` and return.
* Compute a fingerprint for the `assignment` using `claimed_candidate_index`.
* If the source is `MessageSource::Peer(sender)`:
* check if `peer` appears under `known_by` and whether the fingerprint is in the knowledge of the peer. If the peer does not know the block, report for providing data out-of-view and proceed. If the peer does know the block and the `sent` knowledge contains the fingerprint, report for providing replicate data and return, otherwise, insert into the `received` knowledge and return.
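A compressed sketch of that per-peer knowledge check; the fingerprint type and all names are illustrative stand-ins:

```rust
use std::collections::HashSet;

type Fingerprint = (u32, u32); // stand-in, e.g. (validator index, candidate index)

#[derive(Default)]
struct PeerKnowledge {
    sent: HashSet<Fingerprint>,
    received: HashSet<Fingerprint>,
}

enum Outcome {
    ReportOutOfView, // peer does not know the block at all
    ReportDuplicate, // we already sent this fingerprint to the peer
    Accept,          // record as received and process further
}

fn check_peer_assignment(known_by: Option<&mut PeerKnowledge>, fp: Fingerprint) -> Outcome {
    match known_by {
        None => Outcome::ReportOutOfView,
        Some(k) if k.sent.contains(&fp) => Outcome::ReportDuplicate,
        Some(k) => {
            k.received.insert(fp);
            Outcome::Accept
        }
    }
}
```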
@@ -218,7 +218,7 @@ The algorithm is the following:

Imports an approval signature referenced by block hash and candidate index:

-* Load the BlockEntry using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
+* Load the `BlockEntry` using `approval.block_hash` and the candidate entry using `approval.candidate_entry`. If either does not exist, report the source if it is `MessageSource::Peer` and return.
* Compute a fingerprint for the approval.
* Compute a fingerprint for the corresponding assignment. If the `BlockEntry`'s knowledge does not contain that fingerprint, then report the source if it is `MessageSource::Peer` and return. All references to a fingerprint after this refer to the approval's, not the assignment's.
* If the source is `MessageSource::Peer(sender)`:
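The approval path layers one data-dependency rule on top of the assignment logic: an approval is only considered once the corresponding assignment is known for that block. A one-line sketch with stand-in types:

```rust
use std::collections::HashSet;

type Fingerprint = (u32, u32); // stand-in

/// An approval is processed only if the matching assignment fingerprint
/// is already in the block's knowledge.
fn approval_in_order(block_knowledge: &HashSet<Fingerprint>, assignment_fp: &Fingerprint) -> bool {
    block_knowledge.contains(assignment_fp)
}
```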
@@ -13,36 +13,36 @@ In particular this subsystem is responsible for:
this is to ensure availability by at least 2/3+ of all validators; this
happens after a candidate is backed.
- Fetch `PoV` from validators, when requested via `FetchPoV` message from
-backing (pov_requester module).
--
+backing (`pov_requester` module).

The backing subsystem is responsible for making available data available in the
local `Availability Store` upon validation. This subsystem will serve any
network requests by querying that store.
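Schematically (all names below are illustrative stand-ins, not the subsystem's real API), serving a chunk request is a lookup in that store:

```rust
use std::collections::HashMap;

type CandidateHash = [u8; 32];
type ErasureChunk = Vec<u8>;

/// Stand-in for the local availability store.
struct AvailabilityStore {
    chunks: HashMap<(CandidateHash, u32), ErasureChunk>,
}

impl AvailabilityStore {
    /// Answer a `ChunkFetchingRequest`-style query from local storage.
    fn query_chunk(&self, candidate: CandidateHash, index: u32) -> Option<&ErasureChunk> {
        self.chunks.get(&(candidate, index))
    }
}
```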

## Protocol

This subsystem does not handle any peer set messages, but the `pov_requester`
-does connecto to validators of the same backing group on the validation peer
+does connect to validators of the same backing group on the validation peer
set, to ensure fast propagation of statements between those validators and for
ensuring already established connections for requesting `PoV`s. Other than that
this subsystem drives request/response protocols.

Input:

-- OverseerSignal::ActiveLeaves(`[ActiveLeavesUpdate]`)
-- AvailabilityDistributionMessage{msg: ChunkFetchingRequest}
-- AvailabilityDistributionMessage{msg: PoVFetchingRequest}
-- AvailabilityDistributionMessage{msg: FetchPoV}
+- `OverseerSignal::ActiveLeaves(ActiveLeavesUpdate)`
+- `AvailabilityDistributionMessage{msg: ChunkFetchingRequest}`
+- `AvailabilityDistributionMessage{msg: PoVFetchingRequest}`
+- `AvailabilityDistributionMessage{msg: FetchPoV}`

Output:

-- NetworkBridgeMessage::SendRequests(`[Requests]`, IfDisconnected::TryConnect)
-- AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)
-- AvailabilityStore::StoreChunk(candidate_hash, chunk)
-- AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)
-- RuntimeApiRequest::SessionIndexForChild
-- RuntimeApiRequest::SessionInfo
-- RuntimeApiRequest::AvailabilityCores
+- `NetworkBridgeMessage::SendRequests(Requests, IfDisconnected::TryConnect)`
+- `AvailabilityStore::QueryChunk(candidate_hash, index, response_channel)`
+- `AvailabilityStore::StoreChunk(candidate_hash, chunk)`
+- `AvailabilityStore::QueryAvailableData(candidate_hash, response_channel)`
+- `RuntimeApiRequest::SessionIndexForChild`
+- `RuntimeApiRequest::SessionInfo`
+- `RuntimeApiRequest::AvailabilityCores`

## Functionality

@@ -10,14 +10,14 @@ This version of the availability recovery subsystem is based off of direct conne

Input:

-- NetworkBridgeUpdateV1(update)
-- AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)
+- `NetworkBridgeUpdateV1(update)`
+- `AvailabilityRecoveryMessage::RecoverAvailableData(candidate, session, backing_group, response)`

Output:

-- NetworkBridge::SendValidationMessage
-- NetworkBridge::ReportPeer
-- AvailabilityStore::QueryChunk
+- `NetworkBridge::SendValidationMessage`
+- `NetworkBridge::ReportPeer`
+- `AvailabilityStore::QueryChunk`

## Functionality

@@ -51,7 +51,7 @@ struct InteractionParams {
validator_authority_keys: Vec<AuthorityId>,
validators: Vec<ValidatorId>,
// The number of pieces needed.
threshold: usize,
candidate_hash: Hash,
erasure_root: Hash,
}
@@ -65,7 +65,7 @@ enum InteractionPhase {
RequestChunks {
// a random shuffling of the validators which indicates the order in which we connect to the validators and
// request the chunk from them.
shuffling: Vec<ValidatorIndex>,
received_chunks: Map<ValidatorIndex, ErasureChunk>,
requesting_chunks: FuturesUnordered<Receiver<ErasureChunkRequestResponse>>,
}
@@ -90,15 +90,15 @@ On `Conclude`, shut down the subsystem.

1. Check the `availability_lru` for the candidate and return the data if so.
1. Check if there is already an interaction handle for the request. If so, add the response handle to it.
-1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *launch_interaction*. Add an interaction handle to the state and add the response channel to it.
+1. Otherwise, load the session info for the given session under the state of `live_block_hash`, and initiate an interaction with *`launch_interaction`*. Add an interaction handle to the state and add the response channel to it.
1. If the session info is not available, return `RecoveryError::Unavailable` on the response channel.
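A compressed sketch of those steps under assumed types: cache hit first, then piggyback on a running interaction, else launch a new one:

```rust
use std::collections::HashMap;

type CandidateHash = u64;                // stand-in
type AvailableData = Vec<u8>;            // stand-in
type ResponseSender = fn(AvailableData); // stand-in for a oneshot sender

struct Interaction {
    awaiting: Vec<ResponseSender>, // handles to answer when the interaction concludes
}

#[derive(Default)]
struct State {
    availability_lru: HashMap<CandidateHash, AvailableData>,
    interactions: HashMap<CandidateHash, Interaction>,
}

fn handle_recover(state: &mut State, candidate: CandidateHash, tx: ResponseSender) {
    // 1. Cached: answer immediately.
    if let Some(data) = state.availability_lru.get(&candidate) {
        return tx(data.clone());
    }
    // 2. Already in flight: just add our response handle.
    if let Some(interaction) = state.interactions.get_mut(&candidate) {
        return interaction.awaiting.push(tx);
    }
    // 3. Otherwise: launch a new interaction (elided) and register its handle.
    state.interactions.insert(candidate, Interaction { awaiting: vec![tx] });
}
```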

### From-interaction logic

#### `FromInteraction::Concluded`

1. Load the entry from the `interactions` map. It should always exist, barring logic errors. Send the result to each member of `awaiting`.
-1. Add the entry to the availability_lru.
+1. Add the entry to the `availability_lru`.

### Interaction logic

@@ -123,12 +123,12 @@ const N_PARALLEL: usize = 50;
* Request `AvailabilityStoreMessage::QueryAvailableData`. If it exists, return that.
* If the phase is `InteractionPhase::RequestFromBackers`
* Loop:
* If the `requesting_pov` is `Some`, poll for updates on it. If it concludes, set `requesting_pov` to `None`.
* If the `requesting_pov` is `None`, take the next backer off the `shuffled_backers`.
* If the backer is `Some`, issue a `NetworkBridgeMessage::Requests` with a network request for the `AvailableData` and wait for the response.
* If it concludes with a `None` result, return to beginning.
* If it concludes with available data, attempt a re-encoding.
* If it has the correct erasure-root, break and issue an `Ok(available_data)`.
* If it has an incorrect erasure-root, return to beginning.
* If the backer is `None`, set the phase to `InteractionPhase::RequestChunks` with a random shuffling of validators and empty `next_shuffling`, `received_chunks`, and `requesting_chunks` and break the loop.

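The `RequestFromBackers` loop above compresses to roughly the following; the closures stand in for the real networking and erasure-coding calls:

```rust
/// Try each backer in (shuffled) order; accept the first response whose
/// re-encoding matches the expected erasure root.
fn request_from_backers(
    mut shuffled_backers: Vec<u32>,
    fetch: impl Fn(u32) -> Option<Vec<u8>>,       // network request to one backer
    erasure_root_matches: impl Fn(&[u8]) -> bool, // re-encode and compare roots
) -> Option<Vec<u8>> {
    while let Some(backer) = shuffled_backers.pop() {
        match fetch(backer) {
            Some(data) if erasure_root_matches(&data) => return Some(data),
            _ => continue, // no answer or wrong root: try the next backer
        }
    }
    None // exhausted: the caller switches to `RequestChunks`
}
```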
@@ -10,8 +10,8 @@ There is no dedicated input mechanism for bitfield signing. Instead, Bitfield Si

Output:

-- BitfieldDistribution::DistributeBitfield: distribute a locally signed bitfield
-- AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)
+- `BitfieldDistribution::DistributeBitfield`: distribute a locally signed bitfield
+- `AvailabilityStore::QueryChunk(CandidateHash, validator_index, response_channel)`

## Functionality

@@ -114,7 +114,7 @@ fn spawn_validation_work(candidate, parachain head, validation function) {
}
```

-### Fetch Pov Block
+### Fetch PoV Block

Create a `(sender, receiver)` pair.
Dispatch an [`AvailabilityDistributionMessage`][ADM]`::FetchPoV{ validator_index, pov_hash, candidate_hash, tx, }` and listen on the passed receiver for a response. Availability distribution will send the request to the validator specified by `validator_index`, which might not be serving it for whatever reason, therefore we need to retry with other backing validators in that case.
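Sketched with assumed names, the retry amounts to asking the backing validators one at a time until one of them answers:

```rust
/// Ask each backing validator for the PoV in turn; stop at the first success.
fn fetch_pov_with_retry(
    backing_validators: &[u32],
    request: impl Fn(u32) -> Option<Vec<u8>>, // one FetchPoV-style request
) -> Option<Vec<u8>> {
    backing_validators.iter().copied().find_map(request)
}
```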
@@ -8,20 +8,20 @@ The Statement Distribution Subsystem is responsible for distributing statements

Input:

-- NetworkBridgeUpdate(update)
-- StatementDistributionMessage
+- `NetworkBridgeUpdate(update)`
+- `StatementDistributionMessage`

Output:

-- NetworkBridge::SendMessage(`[PeerId]`, message)
-- NetworkBridge::SendRequests (StatementFetching)
-- NetworkBridge::ReportPeer(PeerId, cost_or_benefit)
+- `NetworkBridge::SendMessage(PeerId, message)`
+- `NetworkBridge::SendRequests(StatementFetching)`
+- `NetworkBridge::ReportPeer(PeerId, cost_or_benefit)`

## Functionality

Implemented as a gossip protocol. Handle updates to our view and peers' views. Neighbor packets are used to inform peers which chain heads we are interested in data for.

-It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribtution.
+It is responsible for distributing signed statements that we have generated and forwarding them, and for detecting a variety of Validator misbehaviors for reporting to [Misbehavior Arbitration](../utility/misbehavior-arbitration.md). During the Backing stage of the inclusion pipeline, it's the main point of contact with peer nodes. On receiving a signed statement from a peer in the same backing group, assuming the peer receipt state machine is in an appropriate state, it sends the Candidate Receipt to the [Candidate Backing subsystem](candidate-backing.md) to handle the validator's statement. On receiving `StatementDistributionMessage::Share` we make sure to send messages to our backing group in addition to random other peers, to ensure a fast backing process and getting all statements quickly for distribution.

Track equivocating validators and stop accepting information from them. Establish a data-dependency order:

@@ -71,7 +71,7 @@ The simple approach is to say that we only receive up to two `Seconded` statemen

With that in mind, this simple approach has a caveat worth digging deeper into.

-First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgement introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.
+First: We may be aware of two equivocated `Seconded` statements issued by a validator. A totally honest peer of ours can also be aware of one or two different `Seconded` statements issued by the same validator. And yet another peer may be aware of one or two _more_ `Seconded` statements. And so on. This interacts badly with pre-emptive sending logic. Upon sending a `Seconded` statement to a peer, we will want to pre-emptively follow up with all statements relative to that candidate. Waiting for acknowledgment introduces latency at every hop, so that is best avoided. What can happen is that upon receipt of the `Seconded` statement, the peer will discard it as it falls beyond the bound of 2 that it is allowed to store. It cannot store anything in memory about discarded candidates as that would introduce a DoS vector. Then, the peer would receive from us all of the statements pertaining to that candidate, which, from its perspective, would be undesired - they are data-dependent on the `Seconded` statement we sent them, but they have erased all record of that from their memory. Upon receiving a potential flood of undesired statements, this 100% honest peer may choose to disconnect from us. In this way, an adversary may be able to partition the network with careful distribution of equivocated `Seconded` statements.

The fix is to track, per-peer, the hashes of up to 4 candidates per validator (per relay-parent) that the peer is aware of. It is 4 because we may send them 2 and they may send us 2 different ones. We track the data that they are aware of as the union of things we have sent them and things they have sent us. If we receive a 1st or 2nd `Seconded` statement from a peer, we note it in the peer's known candidates even if we do disregard the data locally. And then, upon receipt of any data dependent on that statement, we do not reduce that peer's standing in our eyes, as the data was not undesired.
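A sketch of that bound (constant and names assumed), tracking, per peer and per validator, the union of candidate hashes we sent and received:

```rust
use std::collections::HashSet;

const MAX_CANDIDATES_PER_VALIDATOR: usize = 4; // 2 we may send + 2 they may send

type CandidateHash = [u8; 32];

#[derive(Default)]
struct KnownCandidates(HashSet<CandidateHash>);

impl KnownCandidates {
    /// Note a candidate the peer is now aware of. Returns `false` only when
    /// the bound is exceeded, i.e. further data about it is genuinely undesired.
    fn note(&mut self, hash: CandidateHash) -> bool {
        if self.0.contains(&hash) || self.0.len() < MAX_CANDIDATES_PER_VALIDATOR {
            self.0.insert(hash);
            true
        } else {
            false
        }
    }
}
```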

