
On-demand & Coretime parachain upgrade fees #2372

Closed
wants to merge 109 commits into from

Conversation

@Szegoo Szegoo (Contributor) commented Nov 16, 2023

Summary

When a parachain is registered, the parachain manager is responsible for the cost of storing the validation code on-chain. After successful registration in the current system, parachains are allowed to perform code upgrades without being required to pay for the differences in validation code size. This means that someone could theoretically register a parachain with minimal code and then, at a later point, freely register validation code that is significantly larger than the original code.

This is obviously a problem that needs to be fixed: being imprecise about the deposit requirements for storing data on-chain invites abuse of the system.

Additionally, this PR introduces a 'code upgrade fee', which is charged whenever an upgrade is scheduled. It is meant to discourage parachains from spamming code upgrades.
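To illustrate the loophole being closed, here is a minimal sketch of a size-dependent deposit in the spirit of the registrar's required_para_deposit (the constants BASE_DEPOSIT and DEPOSIT_PER_BYTE are placeholder values, not the actual runtime configuration):

```rust
// Illustrative sketch only: the real values come from the runtime
// configuration and the real function lives in the paras_registrar pallet.
const BASE_DEPOSIT: u128 = 100;
const DEPOSIT_PER_BYTE: u128 = 1;

/// Deposit required for storing a para's head data and validation code.
fn required_para_deposit(head_size: usize, code_size: usize) -> u128 {
    BASE_DEPOSIT
        .saturating_add(DEPOSIT_PER_BYTE.saturating_mul(head_size as u128))
        .saturating_add(DEPOSIT_PER_BYTE.saturating_mul(code_size as u128))
}

fn main() {
    // Registering with minimal code is cheap...
    let small = required_para_deposit(32, 1_000);
    // ...so if the deposit is never re-evaluated on upgrade, upgrading to a
    // much larger code would be free. Re-computing it closes the loophole.
    let large = required_para_deposit(32, 3_000_000);
    assert!(large > small);
}
```

Under the current system only the deposit at registration time is charged; this PR re-evaluates the formula on every scheduled upgrade.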

Paying for para upgrades

All on-demand and Coretime parachains will be charged for performing validation code upgrades. Upon registration, the billing account responsible for covering the deposit and paying upgrade fees is set to None.
Before scheduling a code upgrade, the billing account must be explicitly set by calling the set_parachain_billing_account_to_self extrinsic, which allows setting it to either the sovereign account or the manager.

Whenever the billing account of a parachain changes, the new account will have the entire required deposit reserved, and the previous account will have its associated deposit unreserved.
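The handover described above can be modeled with a minimal sketch (the type and method names here are hypothetical, not the pallet's actual API): the new account gets the entire required deposit reserved and the previous account is released.

```rust
use std::collections::HashMap;

// Toy model of per-account reserved balances; in the runtime this is
// handled by the ReservableCurrency implementation.
#[derive(Default)]
struct ReservedBalances(HashMap<String, u128>);

impl ReservedBalances {
    /// Reserve the full deposit on `new` and unreserve it from `previous`.
    fn change_billing_account(&mut self, previous: Option<&str>, new: &str, deposit: u128) {
        if let Some(prev) = previous {
            let r = self.0.entry(prev.to_string()).or_insert(0);
            *r = r.saturating_sub(deposit);
        }
        *self.0.entry(new.to_string()).or_insert(0) += deposit;
    }
}

fn main() {
    let mut reserves = ReservedBalances::default();
    // At registration the billing account is unset (None); setting it
    // reserves the entire deposit on the chosen account.
    reserves.change_billing_account(None, "manager", 500);
    assert_eq!(reserves.0["manager"], 500);
    // Switching to the sovereign account moves the reservation over.
    reserves.change_billing_account(Some("manager"), "sovereign", 500);
    assert_eq!(reserves.0["manager"], 0);
    assert_eq!(reserves.0["sovereign"], 500);
}
```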

Rebate

If the required deposit for storing the validation code decreases during a code upgrade, the billing account will receive a refund once the upgrade is successfully performed.
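The rebate can be sketched as a simple settlement step after a successful upgrade (names here are illustrative, not taken from the pallet): compare the old and new required deposits and either refund the difference or reserve the extra amount.

```rust
use std::cmp::Ordering;

#[derive(Debug, PartialEq)]
enum DepositAdjustment {
    /// New code is smaller: unreserve the difference back to the billing account.
    Refund(u128),
    /// New code is larger: reserve the additional amount.
    ReserveExtra(u128),
    Unchanged,
}

fn settle_deposit(current: u128, new_required: u128) -> DepositAdjustment {
    match new_required.cmp(&current) {
        Ordering::Less => DepositAdjustment::Refund(current - new_required),
        Ordering::Greater => DepositAdjustment::ReserveExtra(new_required - current),
        Ordering::Equal => DepositAdjustment::Unchanged,
    }
}

fn main() {
    // Upgrading to smaller code yields a rebate once the upgrade goes through.
    assert_eq!(settle_deposit(1_000, 600), DepositAdjustment::Refund(400));
    // Upgrading to larger code requires reserving the difference.
    assert_eq!(settle_deposit(1_000, 1_500), DepositAdjustment::ReserveExtra(500));
}
```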

System & Lease holding parachains

To avoid a breaking change, all currently registered lease-holding parachains remain eligible to perform code upgrades free of charge.

Since this PR allows Root to perform upgrades for free, all system chains are likewise exempt from code upgrade costs.

Additional benefits

These additional deposit requirements will enable us to reduce the deposit needed for reserving a ParaId. Without this change, if we significantly lower it, there's a risk of making it cheap to exploit the network by registering a bunch of paras with small validation codes and then upgrading them for free to a much larger size.

Should Close: #669

TODOs:

  • Add events
  • Improve docs
  • Merge BillingInfo into ParaInfo
  • Add test to ensure that the billing account has to be explicitly set for lease holding paras
  • Clean up the code
  • Generate weights
  • Update PR description to explain the setting of the billing account in more detail

@eskimor eskimor (Member) left a comment

Looks sensible on a quick first pass. 🚀

polkadot/runtime/common/src/paras_registrar/mod.rs (resolved review thread, outdated)
@Szegoo Szegoo changed the title from "Additional deposit requirement when doing para upgrades" to "Parachain upgrade fees" on Nov 21, 2023
@Szegoo Szegoo marked this pull request as ready for review November 21, 2023 14:23
@Szegoo Szegoo requested a review from a team as a code owner November 21, 2023 14:23
@paritytech-review-bot paritytech-review-bot bot requested a review from a team November 21, 2023 14:23
@Szegoo Szegoo (Contributor, Author) commented Nov 21, 2023

@eskimor @antonva The PR should be ready for a review now :)

@bkchr bkchr (Member) left a comment

Main logic looks good to me. I found one bug where we don't tell the parachain of the failed upgrade. The other things are mainly cosmetic stuff.

One thing I thought about is the developer experience of this. People wanting to try runtime upgrades will be surprised. Maybe we should at least change Zombienet to give each parachain sovereign account some funds.

@@ -4,7 +4,7 @@ bootnode = true

 [relaychain]
 default_image = "{{ZOMBIENET_INTEGRATION_TEST_IMAGE}}"
-chain = "rococo-local"
+chain = "westend-local"
Member

Why was this done? I assume to not require paying fees for the update? If yes, we should revert this and ensure that the parachain has the funds to pay for it. That will directly make this a test for the feature of this PR ;)

Member

Ahh just have seen that you already have discussed this.

@@ -3,7 +3,7 @@ default_image = "{{RELAY_IMAGE}}"
 default_command = "polkadot"
 default_args = [ "-lparachain=debug" ]

-chain = "rococo-local"
+chain = "westend-local"
Member

Same :P

While I'm now asking myself why we have two of these tests.

polkadot/runtime/common/src/paras_registrar/mod.rs (resolved review thread, outdated)
polkadot/runtime/common/src/paras_registrar/migration.rs (resolved review thread, outdated)
polkadot/runtime/common/src/paras_registrar/mod.rs (resolved review thread, outdated)
polkadot/runtime/common/src/paras_registrar/mod.rs (resolved review thread, outdated)
let new_deposit = Self::required_para_deposit(head.0.len(), new_code.0.len());
let current_deposit = info.deposit;

let lease_holding = Self::is_parachain(para);
Member

We should leave some comment on why this is correct right now. In the future this will maybe change.

Member

This is specifically to handle legacy lease holding chains, once they are gone, this code can be removed.

CodeUpgradeScheduleError::FailedToReserveDeposit,
));
// An overestimate of the used weight, but it's better to be safe than sorry.
return Err(<T as Config>::WeightInfo::pre_code_upgrade())
Member

Do we on purpose not refund the withdrawn fee above? If not, we could run both in a storage transaction and revert both if one failed.

Contributor Author

Made this transactional, should make more sense now.

return Err(<T as Config>::WeightInfo::pre_code_upgrade())
};

if let Err(_) = <T as Config>::Currency::withdraw(
Member

Burning the fee here is what we want to do?

Szegoo and others added 9 commits January 18, 2024 17:09
Co-authored-by: Bastian Köcher <git@kchr.de>
@bkchr bkchr (Member) commented Jan 18, 2024

Long-term, we should have Zombienet register parachains the way they are in production (using the registrar).
However, for the purposes of these tests, we can simply use another relay chain runtime that doesn't require upgrade fees.

For now, you can just write a custom JS script, called by Zombienet, that calls force_register in the registrar. In the future we won't need any Zombienet-specific handling; we should start to clean up this mess with para IDs and paras being in different pallets, etc. In general, with coretime the entire parachain/parathread distinction is not correct anymore and needs to be removed.

@paritytech-cicd-pr

The CI pipeline was cancelled due to the failure of one of the required jobs.
Job name: cargo-clippy
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4958532

 let now = <frame_system::Pallet<T>>::block_number();

-weight.saturating_add(<paras::Pallet<T>>::schedule_code_upgrade(
+weight.saturating_accrue(Self::try_schedule_code_upgrade(
@eskimor eskimor (Member) commented Jan 19, 2024

I just realized that this is too late. We need to drop the candidate in backing already, but in a way that should be safe. Mainly there exists a mechanism for this already: UpgradeRestrictionSignal. If an upgrade is not allowed for whatever reason this signal should be set. This is read by the parachain and it won't even try (which is mandatory otherwise it would be bricked, if we only had the filtering of candidates here, called here from verify_backed_candidate).

Reason being: If we only drop the upgrade in inclusion we did not actually prevent the spam vector, as most of the work has already been done. Most importantly a relay chain block was already filled with megabytes of data, affecting other candidates.

Also, we should update the guide. In particular, I forgot again why we need fees at all:

  1. Deposit is large for large runtimes.
  2. Frequency of updates is limited by cool down.
  3. Block validation is already being paid for.

I think the major reason is that upgrading a runtime is rather costly, not only in terms of block utilization (size), but also because of pre-checking. This essentially breaks sharding (all validators do it). So it makes sense to be a bit cautious here. The more important part of this PR is surely that we adjust the deposit.

Member

In order to not delay this further, let's make this a follow up.

Member

> I just realized that this is too late. We need to drop the candidate in backing already, but in a way that should be safe. Mainly there exists a mechanism for this already: UpgradeRestrictionSignal. If an upgrade is not allowed for whatever reason this signal should be set. This is read by the parachain and it won't even try (which is mandatory otherwise it would be bricked, if we only had the filtering of candidates here, called here from verify_backed_candidate).

This is not really true, UpgradeRestrictionSignal is for telling the parachain that it is currently not allowed to do an upgrade. However, we currently have no way to signal that we want to do an upgrade to the relay chain without sending the full new wasm. You are right that the spam vector isn't closed with the current implementation and we should fix it. I mean we could maybe abuse UpgradeRestrictionSignal, but then we should entirely drop the dynamic reserve based on the size of the wasm blob. We should instead charge a constant fee for this (probably just based on the maximum blob size). Then we could use the UpgradeRestrictionSignal to communicate if the billing account has the required UpgradeFee. But yeah, still not a super solution. Someone could still announce an upgrade and move the funds in between backing and availability.

The real proper solution is really to rewrite the current signaling to the relay chain to first only announce the size plus hash of the blob. Then we should reserve the amount on the billing account and signal that it is okay to send the code.

@eskimor eskimor (Member) commented Jan 19, 2024

A fixed deposit would indeed be a lot simpler; I see two downsides with it:

  1. People are not incentivized to save space.
  2. If we ever increase the limit, those earlier parachains can now increase their size for free.

Both are not dramatic, and if we ever find them to be a problem, we can still write code to adjust it on upgrades.

I hate to say this, but maybe we should just adjust the deposit to the fixed maximum for now and get back to this later, e.g. with me actually writing a solid design first (proper signaling, actual concerns, potential solutions, the actually picked solution with reasons) and ideally then also cleaning up the code while we are at it.

@Szegoo Szegoo (Contributor, Author) commented Jan 19, 2024

Probably a dumb question, but what if we restricted code upgrades to only be scheduled from the registrar pallet through the extrinsic? This way the spam vector wouldn't be possible.

Member

Maybe let's see what @eskimor thinks.

I had similar thoughts: Is the complexity really worth it?

@Szegoo has a very good point about incentivization: if the upgrade costs something, then we incentivize teams not to needlessly upgrade, which is very sensible as there is a significant cost for the network. But I like @bkchr's idea of having a very long cooldown period (which is enforced also on failed upgrades), with the option to pay to reduce the cooldown. This would get us all the benefits without risking irritation about having to set up an account and such. People would only need to look into it once they are unhappy with the cooldown time.

So the simplest way forward would be:

Phase 1:

  1. Make the deposit always account for the maximum code size.
  2. Increase cooldown period ... actually this is also something we would need to communicate, but likely less irritating than suddenly having to set up an account.
  3. Make cooldown to also be enforced on failed updates.
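Step 1 of the phase 1 plan can be sketched as follows (constants are placeholder values, not the actual runtime configuration): the deposit is always computed as if the validation code had the maximum allowed size, so later upgrades within the limit need no additional deposit and no billing account.

```rust
// Placeholder values for illustration only.
const BASE_DEPOSIT: u128 = 100;
const DEPOSIT_PER_BYTE: u128 = 1;
const MAX_CODE_SIZE: u128 = 3 * 1024 * 1024;

/// Phase 1 deposit: the actual code size is deliberately ignored and the
/// configured maximum is charged instead.
fn phase1_required_deposit(head_size: u128, _actual_code_size: u128) -> u128 {
    BASE_DEPOSIT + DEPOSIT_PER_BYTE * (head_size + MAX_CODE_SIZE)
}

fn main() {
    // Small and large validation code now require the same deposit, so the
    // "register small, upgrade big" vector is priced in from the start.
    assert_eq!(
        phase1_required_deposit(32, 1_000),
        phase1_required_deposit(32, 2_000_000),
    );
}
```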

Phase 2:

Then we can have a second (not time critical) phase:

This phase is then merely an optimization, as opposed to a necessity. Hence we should also write an RFC here (design document), highlighting exactly all the requirements and options considered.

  1. Find good solution for deposit size: Having it dependent on actual size is a good thing in general as it just aligns incentives with cost, leading to better utilization of resources.
  2. Implement/adjust what we already have in this PR to enable being able to pay for a reduced cooldown.
  3. See downsides below: for a rent-based model / reduced deposit fees, upgrade fees might be mandatory.

Downsides

If we (usually) don't charge for upgrades, this also increases the pressure to not let deposits get too low, as deposits then serve two purposes:

  1. Limit state used (obviously).
  2. Limit the number of upgrades that can happen (together with the cooldown).

Phase 1: Should be fast to implement ... if not, maybe we can rely on the large deposit for now and move (2) and (3) to phase 2 as well. Just changing the deposit calculation (making it bigger) should not even require a security audit.

Member

For the rent-based model: indeed there was quite some confusion about pruning; I don't think Gav or Shawn understood what was actually meant here. Pruning was specifically designed to not be able to break anything - at least no more than an ending lease would, for example.

The real question for whether the rent-based model is feasible is whether, for security, we mostly rely on the opportunity cost of the provided deposit or on the actually locked capital. In a very first analysis of mine, it is unfortunately in part also the amount of locked capital: basically, if you fill up the storage and manage to degrade the service enough, this would have an impact on the price of DOT, so your deposited DOT would decrease in value -> disincentivizing the attack.

Member

For moving forward in launching coretime on Kusama, @Szegoo would you be interested in implementing phase 1? A phase 1 MVP would entail only changing the deposit amount to account for the maximum PVF size.

Member

> 3. Make cooldown to also be enforced on failed updates.

This is already the case; at least I checked this last week. We set the cooldown before we kick off the PVF check.

Contributor Author

> @Szegoo would you be interested into implementing phase 1?

Yes, I will open a PR for that.

github-merge-queue bot pushed a commit that referenced this pull request Jan 24, 2024
This PR implements phase 1 of:
#2372 (comment)

NOTE: This means that all the current parachains can upgrade their code
to the maximum size for free.

---------

Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Radha <86818441+DrW3RK@users.noreply.github.com>
@bkchr bkchr closed this May 2, 2024
bkontur pushed a commit that referenced this pull request May 20, 2024
… pallet-xcm-bridge-hub (#2261)

Xcm bridge hub router v2 (backport to master branch) (#2312)

* copy new pallet (palle-xcm-bridge-hub-router) from dynamic-fees-v1 branch

* added remaining traces of pallet-xcm-bridge-hub-router

* added comment about sharing delivery fee factor between all bridges, opened by this chain

* spelling

* clippy

Implement additional require primitives for dynamic fees directly for pallet-xcm-bridge-hub (#2261)

* added backoff mechanism to inbound bridge queue

* impl backpressure in the XcmBlobHaulerAdapter

* leave TODOs

* BridgeMessageProcessor prototype

* another TODO

* Revert "also temporary (?) remove BridgesByLocalOrigin because the storage format will likely change to be able to resume bridges from the on_iniitalize/on_idle"

This reverts commit bdd7ae11a8942b58c5db6ac6d4e7922aa28cece4.

* prototype for QueuePausedQuery

* implement ExportXcm and MessageDispatch for pallet-xcm-bridge-hub

* spelling

* flush

* small comments to myself

* more backports from dynamic-fees-v1

* use new pallet as exporter and dispatcher in Millau

* use new pallet as exporter and dispatcher in Rialto

* use new pallet as exporter and dispatcher in RialtoParachain

* flush

* fix remaining compilation issues

* warnings + fmt

* fix tests

* LocalXcmChannelManager

* change lane ids

* it works!

* remove bp-xcm-bridge-hub-router and use LocalXcmChannelManager everywhere

* removed commented code

* cleaning up

* cleaning up

* cleaning up

* - separated BridgeId and LaneId
- BridgeId now uses versioned universal locations
- added missing stuff to exporter.rs

* OnMessagesDelivered is back

* start using bp-xcm-bridge-hub as OnMessagesDelivered

* cleaning up

* spelling

* fix stupid issues

* Backport latest relevant dynamic fees changes from v1 to v2 (#2372)

* backport latest relevant dynamic fees changes from v1 to v2

* fix comment

Added remaining unit tests for pallet-xcm-bridge-hub (#2499)

* added backoff mechanism to inbound bridge queue

* impl backpressure in the XcmBlobHaulerAdapter

* leave TODOs

* BridgeMessageProcessor prototype

* another TODO

* Revert "also temporary (?) remove BridgesByLocalOrigin because the storage format will likely change to be able to resume bridges from the on_iniitalize/on_idle"

This reverts commit bdd7ae11a8942b58c5db6ac6d4e7922aa28cece4.

* prototype for QueuePausedQuery

* implement ExportXcm and MessageDispatch for pallet-xcm-bridge-hub

* spelling

* flush

* small comments to myself

* more backports from dynamic-fees-v1

* use new pallet as exporter and dispatcher in Millau

* use new pallet as exporter and dispatcher in Rialto

* use new pallet as exporter and dispatcher in RialtoParachain

* flush

* fix remaining compilation issues

* warnings + fmt

* fix tests

* LocalXcmChannelManager

* change lane ids

* it works!

* remove bp-xcm-bridge-hub-router and use LocalXcmChannelManager everywhere

* removed commented code

* cleaning up

* cleaning up

* cleaning up

* - separated BridgeId and LaneId
- BridgeId now uses versioned universal locations
- added missing stuff to exporter.rs

* OnMessagesDelivered is back

* start using bp-xcm-bridge-hub as OnMessagesDelivered

* cleaning up

* spelling

* fix stupid issues

* added remaining unit tests for pallet-xcm-bridge-hub

fixed benchmarks (#2504)

Remove pallet_xcm_bridge_hub::SuspendedBridges (#2505)

* remove pallet_xcm_bridge_hub::SuspendedBridges

* apply review suggestions
bkontur pushed a commit that referenced this pull request May 20, 2024
… pallet-xcm-bridge-hub (#2261)

bkontur pushed a commit that referenced this pull request May 21, 2024
… pallet-xcm-bridge-hub (#2261)

bkontur pushed a commit that referenced this pull request May 22, 2024
… pallet-xcm-bridge-hub (#2261)

bkontur pushed a commit that referenced this pull request May 23, 2024
… pallet-xcm-bridge-hub (#2261)

Labels
T8-polkadot This PR/Issue is related to/affects the Polkadot network.
Projects
Status: Audited
Status: Completed
Development

Successfully merging this pull request may close these issues.

Charge core time chains fees for upgrading their runtime code
7 participants