This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Commit 35bdba5

Migrate away from weights in host config

gavofyork committed Nov 11, 2022
1 parent 2a12b97 commit 35bdba5
Showing 21 changed files with 371 additions and 1,470 deletions.
23 changes: 19 additions & 4 deletions Cargo.lock

Some generated files are not rendered by default.

244 changes: 0 additions & 244 deletions Cargo.toml

Large diffs are not rendered by default.

2 changes: 0 additions & 2 deletions node/service/src/chain_spec.rs
```diff
@@ -190,8 +190,6 @@ fn default_parachains_host_configuration(
 		max_upward_queue_count: 8,
 		max_upward_queue_size: 1024 * 1024,
 		max_downward_message_size: 1024 * 1024,
-		ump_service_total_weight: Weight::from_ref_time(100_000_000_000)
-			.set_proof_size(MAX_POV_SIZE as u64),
 		max_upward_message_size: 50 * 1024,
 		max_upward_message_num_per_candidate: 5,
 		hrmp_sender_deposit: 0,
```
32 changes: 7 additions & 25 deletions roadmap/implementers-guide/src/runtime/ump.md
```diff
@@ -52,36 +52,18 @@ Candidate Acceptance Function:
 * `check_upward_messages(P: ParaId, Vec<UpwardMessage>`):
   1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
   1. Checks that no message exceeds `config.max_upward_message_size`.
-  1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages
+  1. Verify that queuing up the messages could not result in exceeding the queue's footprint
+     according to the config items. The queue's current footprint is provided in `well_known_keys`
+     in order to facilitate oraclisation on to the para.
 
 Candidate Enactment:
 
 * `receive_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
   1. Process each upward message `M` in order:
-    1. Append the message to `RelayDispatchQueues` for `P`
-    1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
-    1. Ensure that `P` is present in `NeedsDispatch`.
-
-The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
-dispatching any of individual upward messages returns an error.
-
-`process_pending_upward_messages()`:
-  1. Initialize a cumulative weight counter `T` to 0
-  1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered:
-    1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`
-    1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
-    1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
-    1. If `T >= config.ump_service_total_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
-    1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
-    1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
-> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
-> that could take place on the course of handling these upward messages.
+    1. Place in the dispatch queue according to its para ID (or handle it immediately).
 
 ## Session Change
 
-1. For each `P` in `outgoing_paras` (generated by `Paras::on_new_session`):
-  1. Remove `RelayDispatchQueueSize` of `P`.
-  1. Remove `RelayDispatchQueues` of `P`.
-  1. Remove `P` if it exists in `NeedsDispatch`.
-  1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None`
-    - Note that if we don't remove the open/close requests since they are going to die out naturally at the end of the session.
+1. Nothing specific needs to be done, however the channel's dispatch queue may possibly be "swept"
+   which would prevent the dispatch queue from automatically being serviced. This is a consideration
+   for the chain and specific behaviour is not defined.
```
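The acceptance-side footprint check described in this file can be sketched as follows. This is an illustrative model only, not the actual runtime code: the names `HostLimits`, `QueueFootprint`, and the error strings are assumptions made for the sketch.

```rust
// Illustrative sketch of the candidate-acceptance check: queuing the
// candidate's messages must not push the queue's footprint (message count
// and total size) past the configured limits. `HostLimits`, `QueueFootprint`
// and this `check_upward_messages` are hypothetical names, not the actual
// runtime types.

pub struct HostLimits {
    pub max_upward_message_num_per_candidate: u32,
    pub max_upward_message_size: u32,
    pub max_upward_queue_count: u32,
    pub max_upward_queue_size: u32,
}

#[derive(Clone, Copy)]
pub struct QueueFootprint {
    pub count: u32,
    pub size: u32,
}

pub fn check_upward_messages(
    limits: &HostLimits,
    current: QueueFootprint,
    msgs: &[Vec<u8>],
) -> Result<(), &'static str> {
    // Per-candidate message count limit.
    if msgs.len() as u32 > limits.max_upward_message_num_per_candidate {
        return Err("too many upward messages in candidate");
    }
    let mut fp = current;
    for msg in msgs {
        // Per-message size limit.
        if msg.len() as u32 > limits.max_upward_message_size {
            return Err("upward message exceeds max size");
        }
        fp.count += 1;
        fp.size += msg.len() as u32;
    }
    // The would-be footprint must stay within the queue limits.
    if fp.count > limits.max_upward_queue_count {
        return Err("queue message count would be exceeded");
    }
    if fp.size > limits.max_upward_queue_size {
        return Err("queue total size would be exceeded");
    }
    Ok(())
}
```

Note that the check is computed against the *would-be* footprint, so a candidate is rejected as a whole rather than having some of its messages enqueued and others dropped.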
5 changes: 0 additions & 5 deletions roadmap/implementers-guide/src/types/runtime.md
```diff
@@ -65,11 +65,6 @@ struct HostConfiguration {
 	/// no further messages may be added to it. If it exceeds this then the queue may contain only
 	/// a single message.
 	pub max_upward_queue_size: u32,
-	/// The amount of weight we wish to devote to the processing the dispatchable upward messages
-	/// stage.
-	///
-	/// NOTE that this is a soft limit and could be exceeded.
-	pub ump_service_total_weight: Weight,
 	/// The maximum size of an upward message that can be sent by a candidate.
 	///
 	/// This parameter affects the upper bound of size of `CandidateCommitments`.
```
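The soft-limit semantics of `max_upward_queue_size` quoted above (messages are admitted while the queue is below the limit, so a single oversized message can exceed it on its own) can be modelled as a small predicate. This is a reading of the doc comment, not runtime code; `may_enqueue` is an invented name.

```rust
// Illustrative model (not the actual runtime code) of the soft-limit rule
// for `max_upward_queue_size`: messages are admitted while the queue's
// total size is below the limit. An admitted message may itself push the
// total past the limit; in particular an empty queue accepts one message
// of any permitted size, after which nothing more is admitted until the
// queue drains.
pub fn may_enqueue(queue_size: u32, max_upward_queue_size: u32) -> bool {
    queue_size < max_upward_queue_size
}
```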
1 change: 1 addition & 0 deletions runtime/parachains/Cargo.toml
```diff
@@ -57,6 +57,7 @@ test-helpers = { package = "polkadot-primitives-test-helpers", path = "../../pri
 sp-tracing = { git = "https://github.com/paritytech/substrate", branch = "master" }
 thousands = "0.2.0"
 assert_matches = "1"
+pallet-message-queue = { git = "https://github.com/paritytech/substrate", branch = "master" }
 
 [features]
 default = ["std"]
```
39 changes: 2 additions & 37 deletions runtime/parachains/src/configuration.rs
```diff
@@ -19,7 +19,7 @@
 //! Configuration can change only at session boundaries and is buffered until then.
 
 use crate::shared;
-use frame_support::{pallet_prelude::*, weights::constants::WEIGHT_PER_MILLIS};
+use frame_support::pallet_prelude::*;
 use frame_system::pallet_prelude::*;
 use parity_scale_codec::{Decode, Encode};
 use primitives::v2::{Balance, SessionIndex, MAX_CODE_SIZE, MAX_HEAD_DATA_SIZE, MAX_POV_SIZE};
@@ -126,11 +126,6 @@ pub struct HostConfiguration<BlockNumber> {
 	/// decide to do with its PoV so this value in practice will be picked as a fraction of the PoV
 	/// size.
 	pub max_downward_message_size: u32,
-	/// The amount of weight we wish to devote to the processing the dispatchable upward messages
-	/// stage.
-	///
-	/// NOTE that this is a soft limit and could be exceeded.
-	pub ump_service_total_weight: Weight,
 	/// The maximum number of outbound HRMP channels a parachain is allowed to open.
 	pub hrmp_max_parachain_outbound_channels: u32,
 	/// The maximum number of outbound HRMP channels a parathread is allowed to open.
@@ -210,9 +205,6 @@ pub struct HostConfiguration<BlockNumber> {
 	pub needed_approvals: u32,
 	/// The number of samples to do of the `RelayVRFModulo` approval assignment criterion.
 	pub relay_vrf_modulo_samples: u32,
-	/// The maximum amount of weight any individual upward message may consume. Messages above this
-	/// weight go into the overweight queue and may only be serviced explicitly.
-	pub ump_max_individual_weight: Weight,
 	/// This flag controls whether PVF pre-checking is enabled.
 	///
 	/// If the flag is false, the behavior should be exactly the same as prior. Specifically, the
@@ -272,7 +264,6 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
 			max_upward_queue_count: Default::default(),
 			max_upward_queue_size: Default::default(),
 			max_downward_message_size: Default::default(),
-			ump_service_total_weight: Default::default(),
 			max_upward_message_size: Default::default(),
 			max_upward_message_num_per_candidate: Default::default(),
 			hrmp_sender_deposit: Default::default(),
@@ -285,8 +276,6 @@ impl<BlockNumber: Default + From<u32>> Default for HostConfiguration<BlockNumber
 			hrmp_max_parachain_outbound_channels: Default::default(),
 			hrmp_max_parathread_outbound_channels: Default::default(),
 			hrmp_max_message_num_per_candidate: Default::default(),
-			ump_max_individual_weight: (20u64 * WEIGHT_PER_MILLIS)
-				.set_proof_size(MAX_POV_SIZE as u64),
 			pvf_checking_enabled: false,
 			pvf_voting_ttl: 2u32.into(),
 			minimum_validation_upgrade_delay: 2.into(),
@@ -391,7 +380,7 @@ where
 			})
 		}
 
-		if self.max_upward_message_size > crate::ump::MAX_UPWARD_MESSAGE_SIZE_BOUND {
+		if self.max_upward_message_size > crate::inclusion::MAX_UPWARD_MESSAGE_SIZE_BOUND {
 			return Err(MaxUpwardMessageSizeExceeded {
 				max_message_size: self.max_upward_message_size,
 			})
@@ -858,18 +847,6 @@ pub mod pallet {
 			})
 		}
 
-		/// Sets the soft limit for the phase of dispatching dispatchable upward messages.
-		#[pallet::weight((
-			T::WeightInfo::set_config_with_weight(),
-			DispatchClass::Operational,
-		))]
-		pub fn set_ump_service_total_weight(origin: OriginFor<T>, new: Weight) -> DispatchResult {
-			ensure_root(origin)?;
-			Self::schedule_config_update(|config| {
-				config.ump_service_total_weight = new;
-			})
-		}
-
 		/// Sets the maximum size of an upward message that can be sent by a candidate.
 		#[pallet::weight((
 			T::WeightInfo::set_config_with_u32(),
@@ -1044,18 +1021,6 @@ pub mod pallet {
 			})
 		}
 
-		/// Sets the maximum amount of weight any individual upward message may consume.
-		#[pallet::weight((
-			T::WeightInfo::set_config_with_weight(),
-			DispatchClass::Operational,
-		))]
-		pub fn set_ump_max_individual_weight(origin: OriginFor<T>, new: Weight) -> DispatchResult {
-			ensure_root(origin)?;
-			Self::schedule_config_update(|config| {
-				config.ump_max_individual_weight = new;
-			})
-		}
-
 		/// Enable or disable PVF pre-checking. Consult the field documentation prior executing.
 		#[pallet::weight((
 			// Using u32 here is a little bit of cheating, but that should be fine.
```
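The extrinsics in this file all stage their change through `Self::schedule_config_update(...)`, matching the module doc comment "Configuration can change only at session boundaries and is buffered until then." That buffering pattern can be sketched in miniature as follows; `ConfigStore`, `on_session_change`, and the one-field `HostConfiguration` here are simplifications invented for the sketch, not the pallet's actual storage layout.

```rust
// Simplified model of session-buffered configuration updates: changes are
// applied to a pending copy of the active configuration, and the pending
// copy only becomes active at the next session boundary.

#[derive(Clone, Default)]
pub struct HostConfiguration {
    pub max_upward_message_size: u32,
}

pub struct ConfigStore {
    pub active: HostConfiguration,
    pub pending: Option<HostConfiguration>,
}

impl ConfigStore {
    // Stage a change. Multiple calls within one session accumulate into
    // the same pending configuration.
    pub fn schedule_config_update(&mut self, f: impl FnOnce(&mut HostConfiguration)) {
        let mut staged = self.pending.take().unwrap_or_else(|| self.active.clone());
        f(&mut staged);
        self.pending = Some(staged);
    }

    // At the session boundary, the pending configuration (if any) is
    // promoted to active.
    pub fn on_session_change(&mut self) {
        if let Some(p) = self.pending.take() {
            self.active = p;
        }
    }
}
```

The point of the buffering is that all validators switch to the new configuration at the same, well-defined block, rather than whenever the extrinsic happens to execute.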
2 changes: 0 additions & 2 deletions runtime/parachains/src/configuration/benchmarking.rs
```diff
@@ -26,8 +26,6 @@ benchmarks! {
 
 	set_config_with_option_u32 {}: set_max_validators(RawOrigin::Root, Some(10))
 
-	set_config_with_weight {}: set_ump_service_total_weight(RawOrigin::Root, Weight::from_ref_time(3_000_000))
-
 	set_hrmp_open_request_ttl {}: {
 		Err(BenchmarkError::Override(
 			BenchmarkResult::from_weight(T::BlockWeights::get().max_block)
```