This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

rpc: Use the blocks pinning API for chainHead methods #13233

Merged: 48 commits merged into master from lexnv/chainhead_pinning on May 2, 2023

Conversation

@lexnv (Contributor) commented Jan 24, 2023:

This PR leverages the new pinning API introduced by #13157.

  • Use the API to keep blocks pinned
  • Keep a global track of pinned blocks across subscriptions (backend does not propagate errors, nor does the state-db impose limits on pinned headers)
    • Blocks are reference counted and pinned in the background only once
  • When the global limit (512 for chainHead) is reached:
  • Keep track of a block until:
    • 2 pin calls are made (one from the BestBlock and one from the Finalized branch)
    • and the user calls unpin.
  • RAII pattern for a pinned block to ensure the block is not unpinned during method calls (i.e. storage and header queries); a sketch of this guard follows the next paragraph

To ensure proper testing, the in-memory Backend is extended to expose the number of active pinned references for a given block. Since an in-memory Backend instance is used for testing the chainHead, the children method of the in-memory Backend is also extended to return an empty list.
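The RAII guard mentioned above can be illustrated with a minimal, self-contained sketch. This is not the code merged in this PR (the actual guard type is BlockGuard and the real Backend trait is far richer); the Backend/PinnedBlock names below are simplified stand-ins. The point is only the pattern: the pin is taken when the guard is created and released in Drop, so a block cannot be unpinned while a storage or header query still holds the guard.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct Backend {
    // Reference count of active pins per block hash.
    pins: Mutex<HashMap<String, usize>>,
}

impl Backend {
    fn pin_block(self: Arc<Self>, hash: &str) -> PinnedBlock {
        *self.pins.lock().unwrap().entry(hash.to_string()).or_insert(0) += 1;
        PinnedBlock { backend: self, hash: hash.to_string() }
    }
}

/// RAII guard: while this value is alive, the block cannot be unpinned.
struct PinnedBlock {
    backend: Arc<Backend>,
    hash: String,
}

impl Drop for PinnedBlock {
    fn drop(&mut self) {
        let mut pins = self.backend.pins.lock().unwrap();
        if let Some(count) = pins.get_mut(&self.hash) {
            *count -= 1;
            if *count == 0 {
                pins.remove(&self.hash);
            }
        }
    }
}

fn main() {
    let backend = Arc::new(Backend::default());
    {
        let _guard = backend.clone().pin_block("0xabc");
        // ... serve a storage or header query here; the block stays pinned ...
        assert_eq!(backend.pins.lock().unwrap().get("0xabc"), Some(&1));
    }
    // Guard dropped: the pin is released.
    assert!(backend.pins.lock().unwrap().get("0xabc").is_none());
}
```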

CC: @paritytech/subxt-team

@lexnv self-assigned this on Jan 24, 2023
@github-actions bot added the A0-please_review (Pull request needs code review.) label on Jan 24, 2023
@lexnv added the B0-silent (Changes should not be mentioned in any release notes) and C1-low (PR touches the given topic and has a low impact on builders.) labels on Jan 24, 2023
@@ -60,15 +60,15 @@ use sp_runtime::{
use std::{marker::PhantomData, sync::Arc};

/// An API for chain head RPC calls.
-pub struct ChainHead<BE, Block: BlockT, Client> {
+pub struct ChainHead<BE: Backend<Block> + 'static, Block: BlockT, Client> {
@jsdw (Contributor) commented Jan 25, 2023:

Just a nit; nowadays I'm leaning towards not putting any trait bounds on generic params in structs unless they are needed for the definition of the struct (though I've been back and forth on this myself), so I guess I'd lean towards removing the bounds if possible (but curious on what others think!)

@lexnv (author) replied:

Yep, I do like that approach. One nice thing about not having bounds on the struct is that we do not contaminate every structure containing that type.

For soundness reasons, I presume, the compiler requires the bounds from the Drop impl to also be present on the struct.
SubscriptionHandle implements Drop, which needs access to the unpin_block method defined by the Backend<Block> trait.
And unfortunately, this then leaks into SubscriptionManagement and into ChainHead itself.
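A minimal reproduction of that rule, using hypothetical Handle/Backend types rather than the Substrate ones: if the B: Backend<Block> bound is removed from the struct below, the compiler rejects the Drop impl because it would require bounds the struct itself does not have.

```rust
trait Backend<Block> {
    fn unpin_block(&self, block: Block);
}

// The `B: Backend<Block>` bound must be mirrored here; with a bare
// `struct Handle<Block, B>` the `Drop` impl below no longer compiles.
struct Handle<Block, B: Backend<Block>> {
    backend: B,
    block: Option<Block>,
}

impl<Block, B: Backend<Block>> Drop for Handle<Block, B> {
    fn drop(&mut self) {
        // Unpin the block when the handle goes out of scope.
        if let Some(block) = self.block.take() {
            self.backend.unpin_block(block);
        }
    }
}

struct MemBackend;

impl Backend<u32> for MemBackend {
    fn unpin_block(&self, block: u32) {
        println!("unpinning block {block}");
    }
}

fn main() {
    // Prints "unpinning block 7" when `_handle` is dropped at the end of `main`.
    let _handle = Handle { backend: MemBackend, block: Some(7u32) };
}
```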

A contributor replied:

Thanks for this; TIL Drop impls with trait bounds need matching ones on the struct :)

@lexnv requested a review from @skunert on January 25, 2023
@bkchr (Member) left a review:

Looks good in general, just some small things that need to be improved!

Review threads (outdated, resolved) on client/rpc-spec-v2/src/chain_head/tests.rs (x4) and client/rpc-spec-v2/src/chain_head/subscription.rs (x2).
@@ -94,13 +104,17 @@ impl<Block: BlockT> SubscriptionHandle<Block> {

		if inner.blocks.len() == inner.max_pinned_blocks {
			// We have reached the limit. However, the block can be already inserted.

			if inner.blocks.contains(&hash) {
A member commented on the diff:

Isn't there a logic error here? If we try to pin the same block twice, the first unpin will unpin it in the backend (assuming it is only pinned by this subscription). Is it documented in the spec that when you receive the same block as best and some seconds later as finalized (without having called unpin in between), you only need to call unpin once?

@lexnv (author) commented Jan 27, 2023:

Nice catch! I couldn't find something super specific in the RPC spec regarding this, though I was expecting users to call unpin only once (when they are no longer interested in using the block).

From the spec:

> Note: A JSON-RPC client should call chainHead_unstable_unpin only after it is sure to no longer be interested in a certain block. This typically happens after the block has been finalized or pruned. There is no requirement to call chainHead_unstable_unpin as quickly as possible.

I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.

I think we could update the spec documentation for the unpin method to state that only block hashes reported by the newBlock event are supposed to be unpinned.

What do you think about it?

@jsdw (Contributor) commented Jan 27, 2023:

> I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.

It sounds reasonable to me that:

  • a block is pinned just prior to a client being given a newBlock event,
  • later we might see best/finalised events emitted re that block (and no pinning happens at this point),
  • at any point after the newBlock, whenever a user would like to unpin that block hash they will call "unpin" once against the hash to unpin it.

But, if a newBlock is unpinned quickly and then a best/finalized event is emitted for it... should it need unpinning again? I think offhand I'd go with "it's been unpinned, so it stays unpinned, and those events are just notifications about the block you've already declared you don't care about any more".

But that would make it annoying for people to ignore all but best/finalized blocks! Because the simplest logic to do so would be to call "unpin" on all "newBlock" events and not on any best/finalized block events.

"unpin" could potentially take both a block hash and a type (new/best/finalized) so that you can opt-out of all block events you don't care about but still keep pinned the ones you do?

Or, a best/finalized notification could pin the block again, and "unpin" just takes the hash, but that runs the risk of race conditions (I "unpin" a newBlock, but by the time the unpin request hits the node, a bestBlock for that hash is emitted).

So two options then are:

  • Blocks are only pinned at "newBlock". Unpin will unpin, and best/finalized events may exist for blocks that have already been unpinned.
  • new/best/finalized blocks are treated as independent. "unpin" takes a hash and type to unpin exactly one of those.

Maybe there's a simpler way around this?

A member replied:

> I was also thinking along the lines: the newBlock event is where the block is declared and pinned; and bestBlockChanged and finalized are updates about what happens with that block.

I think we should pin on the first event we see (a client can only see bestBlock/finalized without newBlock by having connected after the block was imported) and then unpin should be called when they are not interested anymore. We should update the spec to mention this behavior.

@lexnv (author) replied:

Yep, we could update the spec to reflect something along the lines of:
"Any block hash reported by chainHead's events is guaranteed to be pinned in memory and it requires a single chainHead_unstable_unpin call to unpin the block."

The spec states currently:

> The current finalized block reported in the initialized event, and each subsequent block reported with a newBlock event, is automatically considered by the JSON-RPC server as pinned.


> I think we should pin on the first event we see

Yep, that's along the lines of how we pin internally for this chainhead subscription.

> a client can only see bestBlock/finalized without newBlock by having connected after the block was imported

Before using the subscriptions (best block and finalized) we are:

  • generating the finalized event (and pinning the finalized block)
  • generating a newBlock event for each child of the finalized block, recursively (and pinning the block)

I might be wrong, but I think it's entirely possible that we can miss blocks (warp sync etc.), and as such we always pin the block on the first event that we get from the subscriptions.
In case of errors or jumps, the chainHead should produce the Stop event.

Let me know if that is correct; I'll take an action item to create a PR to clarify the spec a bit, thanks!

@lexnv (author) added:

Another way would be to allow users to call unpin only for hashes reported by the Finalized event: paritytech/json-rpc-interface-spec#32. Let me know if this makes a bit more sense, thanks!

@jsdw (Contributor) commented Feb 8, 2023:

> and then unpin should be called when they are not interested anymore.

What if they call unpin after receiving e.g. a newBlock, and then the server sends a finalized with the same hash? I assume that the block is still unpinned even though the client has been told about it again.

If I was only interested in finalized blocks, and did not want to have any non-finalized blocks pinned any longer than necessary, what would I do?

I gather from chatting with Alex that what I'd do is hold off on calling "unpin" on anything until I see a finalized notification (which also lists all blocks that are "pruned"), and then I could use that list to unpin all blocks that are listed as "pruned", now that I know they aren't important any more. (This relies on that list of pruned blocks really just being a list of all blocks not in the finalized chain, since if they are pinned they wouldn't be pruned.) Does that make sense?

> Another way would be to allow users to call unpin only for hashes reported by the Finalized event: paritytech/json-rpc-interface-spec#32. Let me know if this makes a bit more sense, thanks!

I think GitHub is being slow; this message popped up after I wrote mine :) It also seems like a very valid approach, so long as the finalized event definitely lists all of the blocks produced since the last finalized event that aren't on the finalized chain (so we know what we can safely unpin now). A nice thing about this is that the client is never sent a hash for a block that's already been unpinned (since they can't unpin until they won't hear any more about the block anyway!).
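To make the "only unpin on finalized" strategy concrete, here is a rough client-side sketch. The FollowEvent/PinTracker types are hypothetical illustrations, not the subxt or jsonrpsee API: every hash from newBlock stays tracked until a finalized event reports it as finalized or pruned, at which point the client issues chainHead_unstable_unpin for it.

```rust
use std::collections::HashSet;

enum FollowEvent {
    NewBlock { hash: String },
    BestBlockChanged { hash: String },
    Finalized { finalized: Vec<String>, pruned: Vec<String> },
}

struct PinTracker {
    pinned: HashSet<String>,
}

impl PinTracker {
    /// Returns the hashes for which the client should now call `chainHead_unstable_unpin`.
    fn handle(&mut self, event: FollowEvent) -> Vec<String> {
        match event {
            FollowEvent::NewBlock { hash } => {
                self.pinned.insert(hash);
                Vec::new()
            }
            // Best-block updates neither pin nor unpin anything on their own.
            FollowEvent::BestBlockChanged { .. } => Vec::new(),
            FollowEvent::Finalized { finalized, pruned } => {
                // Everything reported here is of no further use to a finalized-only client.
                finalized
                    .into_iter()
                    .chain(pruned)
                    .filter(|hash| self.pinned.remove(hash))
                    .collect()
            }
        }
    }
}

fn main() {
    let mut tracker = PinTracker { pinned: HashSet::new() };
    tracker.handle(FollowEvent::NewBlock { hash: "0xaa".into() });
    tracker.handle(FollowEvent::NewBlock { hash: "0xbb".into() });
    tracker.handle(FollowEvent::BestBlockChanged { hash: "0xaa".into() });
    let to_unpin = tracker.handle(FollowEvent::Finalized {
        finalized: vec!["0xaa".into()],
        pruned: vec!["0xbb".into()],
    });
    assert_eq!(to_unpin.len(), 2); // unpin both 0xaa (finalized) and 0xbb (pruned)
}
```

(A real client would likely read whatever it still needs from the finalized block before unpinning it; the sketch skips that step.)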

lexnv and others added 2 commits January 27, 2023 15:10
@skunert (Contributor) commented Jan 27, 2023:

The block limit is currently set to 4096 here:

const MAX_PINNED_BLOCKS: usize = 4096;

The limit for the pinned block cache in the backend however is 1024 (arbitrarily chosen):
const PINNING_CACHE_SIZE: usize = 1024;

We should adjust these so they match, preferably keeping the backend one larger than the RPC limit, since it needs to keep additional headroom for notification pinning.

Also I would like to discuss the impact on node memory:

  • RPC API has a limit of pinned blocks per subscription
  • BlockchainDB has a limit of overall pinned blocks for justification & body
  • Afaik the State-DB does not currently have a hard limit on pinned blocks
  • So theoretically, multiple clients could try to keep their maximum of allowed block pins in non-overlapping windows, which would increase the node's memory usage. At some point, BlockchainDB would evict LRU entries, but StateDB would not. Also, when the limit is reached in BlockchainDB, the RPC layer would think that a block is pinned when it actually is not. We should probably implement some feedback on whether pinning in the backend was successful.
  • Maybe we should also globally keep track of pinned blocks in the RPC API
  • We could also implement some generous limit in the State-DB for the pinning.

What do you guys think?

@lexnv (author) commented Feb 7, 2023:

The chainhead limit (4k blocks) is also arbitrarily chosen.

From the initial "pinning in database" work in #12497, we internally picked a limit of delaying the pruning by 32 blocks; considering block production at 6 seconds, that would give users roughly 3 minutes to execute their use case via the chainHead before having to consider the archive methods.

I like the idea of keeping these values in sync, maybe even a bit more restrictive for the chainHead.
For a more informed decision about the hard limit (pinning cache in the backend), I would try to rerun the memory profiling from #12497 and see how much memory that would eat up (or come up with a sensible approximation for the "justification & body" size).

To allow fairness for multiple subscriptions, in the RPC layer:

  • add a hard limit of "1024" pinned blocks, tracking which subscription is active for those blocks
  • add a soft limit of "64" blocks (approx 6 minutes)
    • inform the chainHead spec about this
    • inform the chainHead that a user is encouraged to call unpin only once when they receive the finalized event (extracted from the above comment)

When a user first connects, the chainHead checks that we have at least "64" free blocks.
If the hard limit is hit for pinning, drop a subscription that exceeded the soft limit and check again.
Ideally, all users will keep a maximum of 64 latest blocks pinned; and that would reflect in approx 64 blocks used out of the hard limit.
As a next step, we could try to punish subscriptions that are keeping older blocks around: when we need space, we'd drop a subscription that exceeds the soft limit and has the oldest blocks (i.e. have a scoring value based on the blocks pinned: subscription_score = Sum(each block number); we'd drop the subscription with the lowest value first).
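For illustration, a rough sketch of that policy (the constants, the Subscription structure and pick_subscription_to_drop are hypothetical, not what ends up in this PR): when the global hard limit is exceeded, only subscriptions over the soft limit are candidates, and the one with the lowest score (i.e. holding the oldest blocks) is dropped first.

```rust
use std::collections::HashMap;

const GLOBAL_HARD_LIMIT: usize = 1024; // global pinned-block budget (assumed value)
const SOFT_LIMIT_PER_SUB: usize = 64; // per-subscription budget, ~6 minutes of blocks

struct Subscription {
    /// Block numbers currently pinned by this subscription.
    pinned_block_numbers: Vec<u64>,
}

impl Subscription {
    /// subscription_score = Sum(each block number); older blocks => lower score.
    fn score(&self) -> u64 {
        self.pinned_block_numbers.iter().sum()
    }
}

/// Pick the subscription to terminate once the global hard limit is exceeded.
fn pick_subscription_to_drop(subs: &HashMap<String, Subscription>) -> Option<&String> {
    let total: usize = subs.values().map(|s| s.pinned_block_numbers.len()).sum();
    if total <= GLOBAL_HARD_LIMIT {
        return None;
    }
    subs.iter()
        // Only subscriptions over the soft limit are candidates for termination.
        .filter(|(_, s)| s.pinned_block_numbers.len() > SOFT_LIMIT_PER_SUB)
        // Lowest score == oldest blocks => dropped first.
        .min_by_key(|(_, s)| s.score())
        .map(|(id, _)| id)
}

fn main() {
    let mut subs = HashMap::new();
    subs.insert(
        "well-behaved".to_string(),
        Subscription { pinned_block_numbers: (1000..1064).collect() },
    );
    subs.insert(
        "laggard".to_string(),
        Subscription { pinned_block_numbers: (1..1001).collect() },
    );
    // 64 + 1000 pinned blocks exceed the hard limit; only "laggard" is over the
    // soft limit, so it is the subscription that gets dropped.
    assert_eq!(pick_subscription_to_drop(&subs), Some(&"laggard".to_string()));
}
```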

> RPC layer would think that a block is pinned when it actually is not

Ideally, we'd need to emit event::Stop on the subscription when we can't hold the pinning guarantees. I also think that we could send the stop "lazily", when the user tries to do something with that block and we realize it's nowhere to be found (we do this currently for headers because we had no true pinning IIRC).

I'm not quite sure which is the best solution to implement, considering the alternatives. @bkchr what do you think?

@lexnv (author) commented Feb 7, 2023:

bot rebase

@paritytech-processbot: Rebased

@skunert (Contributor) commented Feb 10, 2023:

This makes a lot of sense to me. Sounds like a good compromise between resource usage and user convenience. As long as users stay within their 64 (or whatever limit we choose), they are pretty safe. Outside of this, there is a chance of getting thrown out.

> inform the chainHead that a user is encouraged to call unpin only once when they receive the finalized event (extracted from the above comment)

I am not sure what you mean by this? Why does it make a difference for us when a user calls unpin (as long as limits are met)?

> (we do this currently for headers because we had no true pinning IIRC)

There is no pinning for headers because we never prune them; I think we keep them around forever.

@lexnv (author) commented Feb 14, 2023:

> I am not sure what you mean by this? Why does it make a difference for us when a user calls unpin (as long as limits are met)?

Yep, indeed this doesn't make a difference. For context, I was referring to the paritytech/json-rpc-interface-spec#32 possible race, and since we could handle that inside substrate it shouldn't matter in the end :)

@lexnv (author) commented Feb 14, 2023:

bot rebase

@paritytech-processbot: Rebased

Review thread (outdated, resolved) on client/api/src/in_mem.rs.
Comment on lines 169 to 176
/// This implementation assumes:
/// - most of the time subscriptions keep a few blocks of the chain's head pinned
/// - iteration through the blocks happens only when the hard limit is exceeded.
///
/// Considering these assumptions, iterating (in the unlikely case) over the hashmap in O(N) is
/// more time efficient and code friendly than paying for:
/// - extra space: an extra BTreeMap<Instant, Hash> to order hashes by oldest insertion
/// - extra time: O(log(N)) insert/remove/find on each `pin` call per subscription
@jsdw (Contributor) commented Apr 21, 2023:

I guess we are relying solely on users to unpin blocks properly though if we want to uphold these assumptions?

Or will I, as a user, hit the "stop" condition of "too many pinned blocks", which would lead to the subscription closing, before this needs to worry about the hard limit being hit and iteration needed?

(Basically I'm wondering if there's a case where users can cause lots of iteration over this if they forget something or are in some way malicious? Also wondering how big it can get; maybe that'll become clear as I review.)

@lexnv (author) replied:

The pinning is indeed in the user's hands; looking at the code again, it seems that the iteration is triggered when we exceed the global limit of how many blocks we can pin (across all subscriptions).
When the limit is exceeded, we'll iterate over all blocks of all subscriptions.
That can iterate over at most 512 blocks per subscription (in the malicious case / in the case where all users forget to unpin).

	pub fn contains_block(&self, hash: &Block::Hash) -> bool {
		let inner = self.inner.read();
		inner.blocks.contains(hash)
	}

	fn global_unpin_block(&mut self, hash: Block::Hash) {
@jsdw (Contributor) commented Apr 21, 2023:

Something I find a little hard to follow is that none of these fns are pub etc., so for a bunch of these things it's hard to know which are used by the "parent" interface and which are private and only called from other fns on this type.

I wonder whether moving some of this stuff into separate files/modules would help make the inner and outer interfaces super obvious, so that it's easier to see what can and can't be called by the other types?

@jsdw (Contributor) left a review:

Looks great! Just a couple of small comments/suggestions for clarity but nothing super important (I resolved a couple of my own comments as I went through and understood it better)

Review threads (outdated, resolved) on client/rpc-spec-v2/src/chain_head/chain_head.rs and client/rpc-spec-v2/src/chain_head/subscription.rs.
return Err(SubscriptionManagementError::BlockHashAbsent)
}

BlockGuard::new(hash, sub.runtime_updates, self.backend.clone())
A member commented:

Looks like lock_block completely bypasses the self pinned-counter mechanisms.
In the end it is not influenced by global_max_pinned_blocks and co.
Is this intended? Shouldn't blocks pinned via this mechanism follow the same policy as the ones pinned using direct pin_block and unpin_block?

Nit: not 100% sure lock_block is the proper name for a pin_block with automatic unpin when the BlockGuard goes out of scope. But I can't even suggest anything better 😃

@lexnv (author) replied:

Yep, the lock_block function bypasses the chainHead pinning counter.

The pin_block reference counts the block once per subscription and will increase the counter only on the first call. The other calls advance the state machine of the block (sketched below).

We could probably use global_pin_block and global_unpin_block; however, we'd need a reference to the SubscriptionManagement, or add an extra lock and pass in a reference to SubscriptionsInner.

That felt more involved, so I've relied on the backend's pinning reference count for this.
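A simplified sketch of that bookkeeping (hypothetical SubscriptionPins/BlockState types, not the merged implementation): only the first pin_block call for a hash bumps the backend pin, later calls merely advance the block's state machine.

```rust
use std::collections::{hash_map::Entry, HashMap};

#[derive(Clone, Copy, PartialEq, Debug)]
enum BlockState {
    Registered,     // seen once, e.g. via newBlock
    FullyAnnounced, // seen again, e.g. via bestBlockChanged/finalized
}

#[derive(Default)]
struct SubscriptionPins {
    blocks: HashMap<String, BlockState>,
    backend_pins: u32, // stands in for calls into the backend's pinning API
}

impl SubscriptionPins {
    /// Returns true if the backend pin count was increased by this call.
    fn pin_block(&mut self, hash: &str) -> bool {
        match self.blocks.entry(hash.to_string()) {
            Entry::Vacant(entry) => {
                // First time this subscription sees the block: pin it in the backend.
                entry.insert(BlockState::Registered);
                self.backend_pins += 1;
                true
            }
            Entry::Occupied(mut entry) => {
                // Already pinned by this subscription: just advance the state machine.
                *entry.get_mut() = BlockState::FullyAnnounced;
                false
            }
        }
    }
}

fn main() {
    let mut pins = SubscriptionPins::default();
    assert!(pins.pin_block("0xabc")); // first call pins in the backend
    assert!(!pins.pin_block("0xabc")); // second call only updates the state
    assert_eq!(pins.backend_pins, 1);
    assert_eq!(pins.blocks["0xabc"], BlockState::FullyAnnounced);
}
```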

Review threads on client/rpc-spec-v2/src/chain_head/chain_head.rs (x2, outdated, resolved) and client/service/src/builder.rs (x2, resolved; one outdated).
			.subs
			.iter_mut()
			.filter_map(|(sub_id, sub)| {
				let sub_time = sub.find_oldest_block_timestamp();
A member commented:

If we have to choose which subscription to remove... shouldn't we find the newest block timestamp for a subscription? What I mean is: if I have several subscriptions, isn't it the most recent one that should be considered? Otherwise I risk lowering my pinning time for a new request just because of an old one I previously made.

@lexnv (author) replied:

We had this discussion on the RPC spec (paritytech/json-rpc-interface-spec#33 (comment)) regarding how to terminate subscriptions when a "soft"/"hard" limit is reached.
As a conclusion, we picked ~1 minute of blocks after which the subscription could be terminated.

If the user does not unpin blocks fast enough (1 block should not stay pinned for more than 1 minute), the subscription will be subject to termination.

Here I'm searching for the oldest block of a subscription; if that timestamp is older than 1 minute, we can terminate that specific subscription.

I believe that also provides an incentive for users to operate at the head of the chain, and to fall back to the archive methods or the old RPC when they want to fetch historical data.

> Otherwise I risk lowering my pinning time for a new request just because of an old one I previously made

IIUC, if a subscription has 2 blocks (1 block that is older than 1 minute and 1 newly added block), we risk lowering the pinning time for the second block, because the first one makes the subscription subject to termination.
Let me know if I got this right, thanks! 🙏
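For clarity, here is roughly what that check looks like, with simplified stand-in types (only find_oldest_block_timestamp is an actual name from the diff above; the rest is hypothetical, not the PR's SubscriptionsInner):

```rust
use std::time::{Duration, Instant};

struct PinnedBlock {
    pinned_at: Instant,
}

struct Subscription {
    blocks: Vec<PinnedBlock>,
}

impl Subscription {
    /// Timestamp of the block that has been pinned the longest, if any.
    fn find_oldest_block_timestamp(&self) -> Option<Instant> {
        self.blocks.iter().map(|b| b.pinned_at).min()
    }

    /// True if the oldest pinned block exceeds the allowed pin duration,
    /// making the whole subscription a candidate for termination.
    fn exceeds_pin_duration(&self, max_duration: Duration, now: Instant) -> bool {
        self.find_oldest_block_timestamp()
            .map(|oldest| now.duration_since(oldest) > max_duration)
            .unwrap_or(false)
    }
}

fn main() {
    let start = Instant::now();
    let sub = Subscription {
        blocks: vec![
            PinnedBlock { pinned_at: start },                           // pinned first
            PinnedBlock { pinned_at: start + Duration::from_secs(30) }, // pinned later
        ],
    };
    // 90 seconds after `start`, the oldest block exceeds a 60-second budget, so the
    // subscription may be terminated even though its second block is well within it.
    let now = start + Duration::from_secs(90);
    assert!(sub.exceeds_pin_duration(Duration::from_secs(60), now));
}
```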

@davxy (Member) commented Apr 26, 2023:

> IIUC, if a subscription has 2 blocks (1 block that is older than 1 minute and 1 newly added block), we risk lowering the pinning time for the second block, because the first one makes the subscription subject to termination.

Yeah, I was referring to this. But if it is documented and by design, then fair enough.

Review threads (outdated, resolved) on client/rpc-spec-v2/src/chain_head/subscription.rs (x2).
lexnv and others added 11 commits April 25, 2023 10:38
@lexnv merged commit cb5be45 into master on May 2, 2023
4 checks passed
@lexnv deleted the lexnv/chainhead_pinning branch on May 2, 2023 15:00
gpestana pushed a commit that referenced this pull request May 4, 2023
Squashed commit message (Signed-off-by and Co-authored-by trailers consolidated at the end):

* rpc/chain_head: Add backend to subscription management
* rpc/chain_head: Pin blocks internally and adjust testing
* client/in_mem: Reference for the number of pinned blocks
* rpc/tests: Check in-memory references to pinned blocks
* rpc/chain_head: Fix clippy
* rpc/chain_head: Remove unused comment
* rpc/chain_head: Place subscription handle under `Arc` and unpin blocks on drop
* rpc/tests: Check all pinned blocks are unpinned on drop
* Apply suggestions from code review (Co-authored-by: Bastian Köcher <git@kchr.de>)
* Update client/rpc-spec-v2/src/chain_head/subscription.rs (Co-authored-by: Bastian Köcher <git@kchr.de>)
* rpc/tests: Retry fetching the pinned references for CI correctness
* client/service: Use 512 as maximum number of pinned blocks
* chain_head: Fix merging conflicts
* rpc/chain_head: Adjust subscriptions to use pinning API
* rpc/chain_head/tests: Test subscription management
* rpc/chain_head: Adjust chain_head follow to the new API
* rpc/chain_head: Adjust chain_head.rs to the new API
* rpc/chain_head/tests: Adjust test.rs to the new API
* client/builder: Use new chainHead API
* rpc/chain_head: Fix documentation
* rpc/chain_head: Fix clippy
* client/in_mem: ChainHead no longer uses `in_mem::children`
* Update client/rpc-spec-v2/src/chain_head/subscription.rs (x4, Co-authored-by: Sebastian Kunert <skunert49@gmail.com>)
* chain_head: Add block state machine
* Address feedback
* Use new_native_or_wasm_executor
* chain_head: Remove 'static on Backend
* chain_head: Add documentation
* chain_head: Lock blocks before async blocks
* chain_head_follower: Remove static on backend
* Update client/service/src/builder.rs (x2, Co-authored-by: Davide Galassi <davxy@datawok.net>)
* chain_head: Add BlockHeaderAbsent to the PartialEq impl
* client: Add better documentation around pinning constants
* chain_head: Move subscription to dedicated module
* subscription: Rename global pin / unpin functions

---------

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Bastian Köcher <git@kchr.de>
Co-authored-by: parity-processbot <>
Co-authored-by: Sebastian Kunert <skunert49@gmail.com>
Co-authored-by: Davide Galassi <davxy@datawok.net>
nathanwhit pushed a commit to nathanwhit/substrate that referenced this pull request Jul 19, 2023
(Same squashed commit message as above.)