
Set receive_window per quic connection #26936

Merged: 9 commits into solana-labs:master on Aug 9, 2022

Conversation

lijunwangs (Contributor)

Problem

Differentiate between staked and non-staked connections and set a different receive_window for each. Testing has shown that, with everything else equal, tweaking the receive window alone affects chunks_received only up to about 10 * PACKET_DATA_SIZE; beyond that it makes little difference.

Summary of Changes

This change sets the receive_window for non-staked nodes to 1 * PACKET_DATA_SIZE, and maps each staked node's connection receive_window to a value between 1.2 * PACKET_DATA_SIZE and 10 * PACKET_DATA_SIZE based on its stake.

The change is based on a Quinn library change that supports tweaking the receive_window per connection on the server side: quinn-rs/quinn#1393

Fixes #
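
As a rough illustration of the stake-based mapping described above, here is a hypothetical sketch; the function name, the linear interpolation, and the use of floating point are assumptions rather than the PR's actual code (only VarInt::from_u64 and the ratios appear in the change itself):

use quinn_proto::VarInt; // also re-exported by quinn in recent versions

/// Solana's maximum packet payload size (1280 - 40 - 8 bytes).
const PACKET_DATA_SIZE: usize = 1232;

/// Linearly map a peer's stake to a receive_window between
/// 1.2 * PACKET_DATA_SIZE (lowest stake) and 10 * PACKET_DATA_SIZE (highest stake).
fn receive_window_for_stake(stake: u64, min_stake: u64, max_stake: u64) -> VarInt {
    let (lo, hi) = (1.2_f64, 10.0_f64);
    let ratio = if max_stake > min_stake {
        lo + (hi - lo) * stake.saturating_sub(min_stake) as f64 / (max_stake - min_stake) as f64
    } else {
        hi
    };
    // Safe to unwrap: the result is at most about 10 * 1232 bytes, far below VarInt's 2^62 limit.
    VarInt::from_u64((ratio * PACKET_DATA_SIZE as f64) as u64).unwrap()
}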


Review thread on sdk/src/quic.rs (outdated, resolved).
pgarg66 (Contributor) left a comment:


Looks good except one review comment.

match peer_type {
    ConnectionPeerType::Unstaked => {
        VarInt::from_u64((PACKET_DATA_SIZE as u64 * QUIC_UNSTAKED_RECEIVE_WINDOW_RATIO) as u64)
            .unwrap()
Contributor:


I think we should handle this unwrap(). With current limits it's guaranteed to succeed, but maybe it is not future-proof.
How about if this function returns a Result<VarInt, ..> and the caller sets the receive window only if the result is Ok?
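
A minimal sketch of the reviewer's suggestion; the names compute_receive_window and set_receive_window (the per-connection setter added by quinn-rs/quinn#1393) are assumptions, and VarIntBoundsExceeded is quinn-proto's error type for VarInt::from_u64:

use quinn_proto::{VarInt, VarIntBoundsExceeded};

const PACKET_DATA_SIZE: u64 = 1232; // Solana's packet payload size

// Return a Result instead of unwrapping; a ratio large enough to overflow VarInt
// simply propagates an error instead of panicking.
fn compute_receive_window(ratio: u64) -> Result<VarInt, VarIntBoundsExceeded> {
    VarInt::from_u64(PACKET_DATA_SIZE.saturating_mul(ratio))
}

// At the call site, the window is applied only when the computation succeeds:
// if let Ok(window) = compute_receive_window(ratio) {
//     connection.set_receive_window(window);
// }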

lijunwangs (Contributor, Author):


Done

ryleung-solana (Contributor) left a comment:


I'm kind of confused about the intent of this... if I'm understanding correctly, this sets the total amount of data the connection can receive. If the sender sends more than that, the server will close the connection. What if the sender wants to send more (e.g. it has a big batch of transactions to send)? It seems kind of inconvenient to have your connection repeatedly dropped in the middle of a batch send, only to be forced to reconnect.

lijunwangs (Contributor, Author) commented on Aug 8, 2022:

(Quoting ryleung-solana's comment above.)

receive_window != max_data. receive_window is Quinn's mechanism for implementing the flow control described by QUIC's max_data. As data is read, max_data is credited, increased, and communicated back to the sender periodically. If the sender is exceeding the data limit, it becomes blocked and signals that with a DATA_BLOCKED frame. Quinn's implementation ensures the client side does not exceed MAX_DATA. If the sender maliciously ignores MAX_DATA, the connection will be disconnected by the server. In my tests with bench-tps I have not seen connections being remade.
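
To illustrate the point about cooperative senders, a minimal sketch (not code from the PR), assuming a quinn 0.8-era async API and an already-established quinn::Connection:

// A sender pushing a large batch over a connection with a small receive_window is
// throttled, not disconnected: write_all simply waits until the server reads data
// and credits more MAX_DATA allowance.
async fn send_batch(
    connection: &quinn::Connection,
    batch: &[Vec<u8>],
) -> Result<(), Box<dyn std::error::Error>> {
    for tx in batch {
        let mut stream = connection.open_uni().await?;
        stream.write_all(tx).await?; // waits whenever flow-control credit runs out
        stream.finish().await?; // finish() is async in quinn 0.8.x
    }
    Ok(())
}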

lijunwangs merged commit a69470f into solana-labs:master on Aug 9, 2022
sakridge (Member) left a comment:


lgtm

Comment on lines +24 to +25
quinn = {git = "https://github.com/quinn-rs/quinn.git", branch = "0.8.x", commit = "37c19743cc881cf71369946d572849d5d2ffc3fd"}
quinn-proto = {git = "https://github.com/quinn-rs/quinn.git", branch = "0.8.x", commit = "37c19743cc881cf71369946d572849d5d2ffc3fd"}
behzadnouri (Contributor):


@lijunwangs This does not seem correct. I am seeing warnings:

warning: solana/streamer/Cargo.toml: unused manifest key: dependencies.quinn-proto.commit                                                           
warning: solana/streamer/Cargo.toml: unused manifest key: dependencies.quinn.commit 

You need to use rev instead:
https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html#cargotoml-vs-cargolock
https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#specifying-dependencies-from-git-repositories

as in:
https://github.com/solana-labs/solana/blob/773a4dd4d/Cargo.toml#L100
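
For reference, the corrected dependency lines would presumably use rev with the same commit hash (a sketch; the branch key can be dropped since Cargo only needs the revision):

quinn = { git = "https://github.com/quinn-rs/quinn.git", rev = "37c19743cc881cf71369946d572849d5d2ffc3fd" }
quinn-proto = { git = "https://github.com/quinn-rs/quinn.git", rev = "37c19743cc881cf71369946d572849d5d2ffc3fd" }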

lijunwangs (Contributor, Author):


Thanks. @behzadnouri -- will take care of it.

xiangzhu70 pushed a commit to xiangzhu70/solana that referenced this pull request on Aug 17, 2022.

lijunwangs added a commit to lijunwangs/solana that referenced this pull request on Oct 26, 2022.
Comment on lines +39 to +40
let mut max_stake: u64 = 0;
let mut min_stake: u64 = u64::MAX;
Contributor:


This is computing {min,max}_stake but never actually updating those fields in shared_staked_nodes.
This is patched in #31082

Until #31082 is merged, these two fields are always zero, so any benchmarks or tests done so far have been wrong.

lijunwangs (Contributor, Author):


Sorry, I do not understand; the min/max is updated in try_refresh_stake_maps, lines 76-82. What am I missing?


lijunwangs (Contributor, Author):


Got it. Thanks! The results in the attached PDF comparing different receive_window sizes were obtained with hard-coded values when I set the receive_window per connection, as we did not have a mechanism to simulate stakes in the bench-tps tool.
