
Upgrade to latest polkadot-sdk@1.7 release #187

Merged: 202 commits into polkadot-fellows:main on Mar 5, 2024

Conversation

@bkontur (Contributor) commented Feb 14, 2024

Based on the bump to polkadot-sdk@1.6.0.

Attached: the output of `cargo upgrade -v --pinned --incompatible` in cargo-upgrade-version-bump.log.

## For reviewers

This PR targets polkadot-fellows' main to bring the update into the fellows repo, but if you want to see the real diff relevant to the polkadot-sdk@1.7.0 update, please check: bkontur/runtimes@bko-bump-to-1.6...bkontur:runtimes:bko-bump-to-1.7.

TODO

- [pallet_identity] removed `FieldDeposit` and `MaxAdditionalFields`, added `ByteDeposit`
- fix import `simple` -> `legacy`
- …: `common` to `common` and `chains`

chore: remove unneeded deps with `cargo machete` for integration tests
bkontur and others added 7 commits February 29, 2024 09:52
…hip/mod.rs

Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Comment on lines +218 to +222
```rust
pub struct ParentOrParentsPlurality;
impl Contains<Location> for ParentOrParentsPlurality {
	fn contains(location: &Location) -> bool {
		matches!(location.unpack(), (1, []) | (1, [Plurality { .. }]))
	}
}
```

We have e2e tests for these cases: OpenGov from the Relay Chain executes some permissioned operation on AssetHub; OpenGov initiates the Fellowship on Collectives, the Fellowship whitelists the call, and the Fellowship executes some permissioned operation on AssetHub. Not sure where these tests are located now, but we had them in polkadot-sdk before.
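
For context on the filter itself: it admits only the Relay Chain origin and governance pluralities on the Relay Chain. A minimal sketch of what it matches, assuming the XCM v4 prelude types and the struct above in scope (illustrative only, not code from this PR):

```rust
use frame_support::traits::Contains;
use xcm::latest::prelude::*;

fn filter_examples() {
	// The Relay Chain itself: unpacks to (1, []).
	assert!(ParentOrParentsPlurality::contains(&Location::parent()));
	// A governance body on the Relay Chain: unpacks to (1, [Plurality { .. }]).
	let plurality = Location::new(1, [Plurality { id: BodyId::Executive, part: BodyPart::Voice }]);
	assert!(ParentOrParentsPlurality::contains(&plurality));
	// A sibling parachain does not match, so it cannot use this privileged path.
	let sibling = Location::new(1, [Parachain(1000)]);
	assert!(!ParentOrParentsPlurality::contains(&sibling));
}
```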

Comment on lines +204 to +208
```rust
pub struct ParentOrParentsPlurality;
impl Contains<Location> for ParentOrParentsPlurality {
	fn contains(location: &Location) -> bool {
		matches!(location.unpack(), (1, []) | (1, [Plurality { .. }]))
	}
}
```

Replied in the comment above. But I would review the tests we have now and make sure we cover all the important cases. I will do the review for OpenGov and the Fellowship.

bkontur and others added 2 commits March 4, 2024 13:26
* feat(pallet-xcm): bump version

* feat(claim_assets): add a test for claim_assets in all runtimes

* fix(pallet_xcm::benchmarking): add missing get_asset function to runtimes

* fix: fmt

* chore: move claim_asset tests to their own file under each runtime

* chore(claim_assets): add weights

* feat(claim_assets): more test cases

* fix: fmt
@bkontur (Contributor, Author) commented Mar 4, 2024

/merge

@fellowship-merge-bot enabled auto-merge (squash) March 4, 2024 22:29

@fellowship-merge-bot: Enabled auto-merge in Pull Request

Available commands
  • /merge: Enables auto-merge for Pull Request
  • /merge cancel: Cancels auto-merge for Pull Request
  • /merge help: Shows this menu

For more information see the documentation

@fellowship-merge-bot merged commit ba359a4 into polkadot-fellows:main on Mar 5, 2024
29 of 30 checks passed
@bkontur deleted the bko-bump-to-1.7 branch March 5, 2024 04:30
fellowship-merge-bot pushed a commit that referenced this pull request on Mar 25, 2024
This is similar to the two chains we deployed to Rococo and Westend,
with a different `PriceAdapter` implementation. This is likely to be the
most contentious addition here and is obviously not final, but it is a start
until the correct function and parametrisation can be decided upon.
Discussion is welcome.

I have also considered adding a minimum factor in `adapt_price` to
ensure that we never actually hit 0. Due to the price corrections being
purely multiplicative, if the price ever hits zero there is no going
back, which I think should be configurable to avoid. I would be
interested to hear people's thoughts on this. I can also add this
minimum value to the broker pallet upstream (which I think is probably
the best place for that).

Due to the Transact XCMs in the `CoretimeInterface`, there is a need for
hardcoded weights for several relay chain calls. These should be checked
after benchmarking the relay chain. There are hardcoded weights on the
Relay Chain which should also be checked when the Coretime Chain is
benchmarked. XCM emulator tests are being put together to ensure this
happens automatically.

TODO:
- [x] Refine hardcoded relay weights once they are available (likely in
the merge of #187). These calls and the hardcoded weights in the relay
should be checked after benchmarks are rerun before release
- [x] Add proxy pallet
- [x] Define new `OnRevenue` to burn revenue from Coretime
([RFC-010](https://polkadot-fellows.github.io/RFCs/approved/0010-burn-coretime-revenue.html))
WIP:
[here](https://github.com/seadanda/runtimes/tree/donal-coretime-kusama-burn).

---

## Pricing model

The `AdaptPrice` implementation in this PR is designed together with a
guideline configuration to provide conservative limits in the early
sales to avoid high volatility. I have listed the suggested
configuration and outlined the reasoning below.

### tl;dr:
#### Price adapter behaviour:
`leadin_factor_at`: linear with a max of 5x
`adapt_price`: linear with conservative limits; the maximum factor is not
relevant for our ideal bulk proportion settings, but it is set to 1.2 as a
conservative limit in case we want to change the config without a runtime
upgrade.

#### Start sales parameters:
`initialPrice`: ~5 KSM 
`coreCount`: 3

#### Configuration parameters:
`advanceNotice`: 10 (1 min)
`interludeLength`: 50400 (7 days)
`leadinLength`: 50400 (7 days)
`regionLength`: 5040 (28 days)
`idealBulkProportion`: 1000000000 (100% for the first few sales)
`limitCoresOffered`: 3
`renewalBump`: 30000000 (3% per sale / ~47% annually)
`contributionTimeout`: 5040 (28 days)
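
As a sanity check of the durations implied by the configuration above (a sketch in plain integers, using the 12s parachain and 6s relay block times mentioned in the reasoning below):

```rust
// Illustrative check only, not runtime code.
const DAY_SECS: u32 = 24 * 60 * 60;

fn duration_checks() {
	// interludeLength / leadinLength: 50_400 coretime-chain blocks at 12s = 7 days.
	assert_eq!(50_400 * 12, 7 * DAY_SECS);
	// regionLength / contributionTimeout: 5_040 timeslices x 80 relay blocks x 6s = 28 days.
	assert_eq!(5_040 * 80 * 6, 28 * DAY_SECS);
	// advanceNotice: 10 relay blocks at 6s = ~1 minute.
	assert_eq!(10 * 6, 60);
}
```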

---

### Price adapter reasoning
For the price adapter, the main goals are to avoid frontrunning while also
minimising volatility in the early sales. The assumption is that the price
will take time to stabilise, and with large corrections we end up
flip-flopping between values far above and far below the true market price;
the further apart those values are between sales, the longer stabilisation
takes. We assume few cores per sale and take into account that most cores are
occupied by a lease.

The decision to go for a high lead-in factor and a very conservative price
adapter is due to the expectation that the early sales will be configured
such that all cores are sold. A high lead-in max factor allows people to buy
at a price they feel comfortable with, which informs the market price. In
combination with a conservative price adapter (initially with no upward
correction), this means the price people are willing to pay carries over into
the next sale when the cores sell out, and is adjusted downwards if not all
cores are sold. In the extreme case where no cores are sold, a minimum factor
of 0.5 is applied to gradually bring the price down until a clearing price is
found, rather than having something symmetrical which snaps up and can then
rebound right back down.
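
To make the shape of this concrete, here is a rough sketch of the curves described above (not the `AdaptPrice` implementation in this PR, and using `f64` rather than the runtime's fixed-point types):

```rust
/// Linear lead-in: the sale opens at 5x the base price and decays to 1x over
/// the lead-in period. `progress` runs from 0.0 (start) to 1.0 (end).
fn leadin_factor_at(progress: f64) -> f64 {
	const MAX_FACTOR: f64 = 5.0;
	MAX_FACTOR - (MAX_FACTOR - 1.0) * progress
}

/// Linear adaptation with conservative bounds: selling fewer cores than the
/// ideal scales the next base price down, to a floor of 0.5x if nothing sells;
/// any upward correction is capped at 1.2x. With the ideal bulk proportion at
/// 100% (ideal == limit) the upward branch is never reached.
fn adapt_price(sold: u32, ideal: u32, limit: u32) -> f64 {
	if sold <= ideal {
		0.5 + 0.5 * sold as f64 / ideal.max(1) as f64
	} else {
		1.0 + 0.2 * (sold - ideal) as f64 / (limit - ideal).max(1) as f64
	}
}
```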

### Start sales reasoning
This is purely for the first sale, after which the price adapter takes
over.
We suggest a starting value between ~4.4 KSM and ~36.8 KSM, depending on the
figure we assume for utilisation. These values correspond to the average
(mean) utilisation of 13% and to maximum utilisation of 100%, respectively.
We estimate that the true market price lies between these two values, and so
we suggest starting from the average (mean) with a lead-in factor of 5.
This lets the market decide between guardrails of ~13%-65% utilisation in the
first sale. We then round the suggestion up to 5 KSM to acknowledge the
uncertainty around this value.

This is based on the model set out in [Erin's
proposal](https://forum.polkadot.network/t/initial-coretime-pricing/5187/7)
and [(also Erin's) Kusama tweak to reflect
utilisation](https://forum.polkadot.network/t/initial-coretime-pricing/5187/24).
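
Plugging the suggested start-sale parameters into the lead-in sketch above, the first sale would open at 5 KSM x 5 = 25 KSM and decay linearly to the 5 KSM base price over the 7-day lead-in (illustrative numbers only):

```rust
fn first_sale_prices() {
	let initial_price_ksm = 5.0;
	// Sale opens at 5 KSM x 5 = 25 KSM...
	assert_eq!(initial_price_ksm * leadin_factor_at(0.0), 25.0);
	// ...is at 15 KSM halfway through the 7-day lead-in...
	assert_eq!(initial_price_ksm * leadin_factor_at(0.5), 15.0);
	// ...and reaches the 5 KSM base price when the lead-in ends.
	assert_eq!(initial_price_ksm * leadin_factor_at(1.0), 5.0);
}
```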

### Configuration reasoning
```rust
/// The number of Relay-chain blocks in advance which scheduling should be fixed and the
/// `Coretime::assign` API used to inform the Relay-chain.
pub advance_notice: RelayBlockNumber,
```
`advanceNotice`: 10 (~1 min) - we require ~10 blocks to send notifyCore
messages for 100 cores if the maximum is 10 XCM messages per block. Note:
6s Relay-chain blocks.

```rust
/// The length in blocks of the Interlude Period for forthcoming sales.
pub interlude_length: BlockNumber,
```
`interludeLength`: 50400 (7 days) - gives people ample time to renew
without reducing the sale period too much. Note: 12s blocks.

```rust
/// The length in blocks of the Leadin Period for forthcoming sales.
pub leadin_length: BlockNumber,
```
`leadinLength`: 50400 (7 days) - the price drops for 7 days before becoming
fixed, then stays fixed for just 14 days before we hit the next region.
Note: 12s blocks.

```rust
/// The length in timeslices of Regions which are up for sale in forthcoming sales.
pub region_length: Timeslice,
```
`regionLength`: 5040 (28 days) - this is in timeslices, which are set to
80 blocks in the coretime-kusama runtime. These are Relay-chain blocks (6s).

```rust
/// The proportion of cores available for sale which should be sold in order for the price
/// to remain the same in the next sale.
pub ideal_bulk_proportion: Perbill,
```
`idealBulkProportion`: 1000000000 or 400000000 - 100% paired with a small
core offering initially, or 40-60% later on when more leases have expired
and there is a more fluid market with a higher core limit. This needs to be
modelled before being updated after the first few sales.

```rust
/// An artificial limit to the number of cores which are allowed to be sold. If `Some` then
/// no more cores will be sold than this.
pub limit_cores_offered: Option<CoreIndex>,
```
`limitCoresOffered`: 3 initially - this should replace the auctions that
were cancelled (coupled with the 100% ideal bulk proportion, it means we
don't cause price spikes).

```rust
/// The amount by which the renewal price increases each sale period.
pub renewal_bump: Perbill,
```
`renewalBump`: 30000000 - 3% every 28 days equates to around 47%
annually
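
As a quick check of the annualised figure: a 3% bump per 28-day region compounds over roughly 13 regions per year to about 47% (illustrative only):

```rust
fn renewal_bump_check() {
	let annual = 1.03_f64.powf(365.0 / 28.0); // ~13.04 regions per year
	assert!((annual - 1.47).abs() < 0.01);
}
```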

```rust
/// The duration by which rewards for contributions to the InstaPool must be collected.
contribution_timeout: Timeslice,
```
`contributionTimeout`: 5040 (28 days) - not much thought behind this one;
1 region to collect seems like a good starting point. This is not relevant
until Coretime Credits are implemented on the Relay Chain.

---------

Co-authored-by: joe petrowski <25483142+joepetrowski@users.noreply.github.com>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: Dmitry Sinyavin <dmitry.sinyavin@parity.io>
Co-authored-by: joepetrowski <joe@parity.io>
Co-authored-by: Branislav Kontur <bkontur@gmail.com>
Co-authored-by: Bastian Köcher <info@kchr.de>