Set receive_window per quic connection #26936
Conversation
Co-authored-by: Pankaj Garg <pankaj@solana.com>
Force-pushed from 242a4fd to 3eca2c1
Looks good except one review comment.
streamer/src/nonblocking/quic.rs
Outdated
match peer_type {
    ConnectionPeerType::Unstaked => {
        VarInt::from_u64((PACKET_DATA_SIZE as u64 * QUIC_UNSTAKED_RECEIVE_WINDOW_RATIO) as u64)
            .unwrap()
I think we should handle this unwrap(). With the current limits it is guaranteed to succeed, but it may not be future proof. How about having this function return a Result<VarInt, ..>, with the caller setting the receive window only if the result is Ok?
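A stdlib-only sketch of the suggested shape. PACKET_DATA_SIZE = 1232 is solana-sdk's value; VarIntBoundsExceeded and the 2^62 - 1 bound are stand-ins for quinn's VarInt types, so this is an illustration of the pattern rather than the PR's actual code:

```rust
// Stand-ins for quinn's VarInt machinery: quinn's VarInt::from_u64
// rejects values >= 2^62, which is what the bound check models here.
const VARINT_MAX: u64 = (1u64 << 62) - 1;
const PACKET_DATA_SIZE: u64 = 1232; // from solana-sdk
const QUIC_UNSTAKED_RECEIVE_WINDOW_RATIO: u64 = 1;

#[derive(Debug, PartialEq)]
struct VarIntBoundsExceeded;

// Return the receive window as a Result instead of unwrap()ing,
// so the caller can simply skip setting the window on overflow.
fn compute_receive_window(ratio: u64) -> Result<u64, VarIntBoundsExceeded> {
    let window = PACKET_DATA_SIZE
        .checked_mul(ratio)
        .ok_or(VarIntBoundsExceeded)?;
    if window > VARINT_MAX {
        return Err(VarIntBoundsExceeded);
    }
    Ok(window)
}

fn main() {
    // Caller applies the window only when the computation succeeds.
    if let Ok(window) = compute_receive_window(QUIC_UNSTAKED_RECEIVE_WINDOW_RATIO) {
        println!("receive_window = {window}");
    }
    // A ratio large enough to overflow yields Err instead of panicking.
    assert!(compute_receive_window(u64::MAX).is_err());
}
```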
Done
I'm kind of confused about the intent of this... if I'm understanding correctly, this sets the total amount of data the connection can receive. If the sender sends more than that, the server will close the connection. What if the sender wants to send more (e.g. it has a big batch of transactions to send)? It seems inconvenient to have your connection repeatedly dropped in the middle of a batch send, only to be forced to reconnect.
receive_window != max_data. receive_window is Quinn's implementation of the flow control described by QUIC's max_data mechanism. As data is read, max_data is credited, increased, and periodically communicated back to the sender. If the sender is exceeding the data limit, it becomes blocked and signals that with a DATA_BLOCKED frame. The Quinn implementation ensures the client side does not exceed MAX_DATA. If the sender maliciously ignores MAX_DATA, the server will disconnect the connection. In my tests with bench-tps I have not seen connections being remade.
lgtm
quinn = {git = "https://github.com/quinn-rs/quinn.git", branch = "0.8.x", commit = "37c19743cc881cf71369946d572849d5d2ffc3fd"}
quinn-proto = {git = "https://github.com/quinn-rs/quinn.git", branch = "0.8.x", commit = "37c19743cc881cf71369946d572849d5d2ffc3fd"}
@lijunwangs This does not seem correct. I am seeing warnings:
warning: solana/streamer/Cargo.toml: unused manifest key: dependencies.quinn-proto.commit
warning: solana/streamer/Cargo.toml: unused manifest key: dependencies.quinn.commit
You need to use rev instead:
https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html#cargotoml-vs-cargolock
https://doc.rust-lang.org/cargo/reference/specifying-dependencies.html#specifying-dependencies-from-git-repositories
as in:
https://github.com/solana-labs/solana/blob/773a4dd4d/Cargo.toml#L100
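For example, the quoted dependency lines could be rewritten as follows (hashes copied from the lines above; note that Cargo allows only one of branch, tag, or rev per git dependency, so the branch key is dropped when pinning by rev):

```toml
quinn = { git = "https://github.com/quinn-rs/quinn.git", rev = "37c19743cc881cf71369946d572849d5d2ffc3fd" }
quinn-proto = { git = "https://github.com/quinn-rs/quinn.git", rev = "37c19743cc881cf71369946d572849d5d2ffc3fd" }
```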
Thanks. @behzadnouri -- will take care of it.
This change sets the receive_window for non-staked nodes to 1 * PACKET_DATA_SIZE, and maps staked nodes' connection receive_window between 1.2 * PACKET_DATA_SIZE and 10 * PACKET_DATA_SIZE based on their stake. The change is based on a Quinn library change that supports per-connection receive_window adjustment on the server side: quinn-rs/quinn#1393
let mut max_stake: u64 = 0;
let mut min_stake: u64 = u64::MAX;
Sorry, I do not understand; the min/max is updated in try_refresh_stake_maps lines 76-82. What am I missing?
The respective fields in shared_staked_nodes are not updated:
https://github.com/solana-labs/solana/blob/4564bcdc1/core/src/staked_nodes_updater_service.rs#L54-L57
These fields:
https://github.com/solana-labs/solana/blob/03abaf76d/streamer/src/streamer.rs#L31-L32
are always zero.
Got it. Thanks! The results in the attached PDF comparing different receive_window sizes were obtained via hard-coded values when I set the receive_window per connection -- we did not have a mechanism to simulate stakes in the bench-tps tool.
Problem
Differentiate staked and non-staked connections and set different receive_window values. Testing has shown that, with everything else equal, tweaking the receive window alone impacts chunks_received only below about 10 * PACKET_DATA_SIZE; beyond that it does not make much difference.
Summary of Changes
This change sets the receive_window for non-staked nodes to 1 * PACKET_DATA_SIZE, and maps staked nodes' connection receive_window between 1.2 * PACKET_DATA_SIZE and 10 * PACKET_DATA_SIZE based on their stake.
The change is based on a Quinn library change that supports per-connection receive_window adjustment on the server side: quinn-rs/quinn#1393
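The stake-to-window mapping described above can be sketched as a linear interpolation. This is an illustration, not the PR's exact code: the function name, the min/max-stake parameters, and the interpolation formula are assumptions; only the 1x / 1.2x / 10x ratios and PACKET_DATA_SIZE = 1232 come from the description:

```rust
// Illustrative sketch: linearly map a node's stake onto a receive-window
// between 1.2 * PACKET_DATA_SIZE and 10 * PACKET_DATA_SIZE, with
// unstaked peers getting 1 * PACKET_DATA_SIZE.
const PACKET_DATA_SIZE: u64 = 1232; // from solana-sdk

fn receive_window_for_stake(stake: u64, min_stake: u64, max_stake: u64) -> u64 {
    if stake == 0 || max_stake <= min_stake {
        // Unstaked peer (or degenerate stake range): 1 * PACKET_DATA_SIZE.
        return PACKET_DATA_SIZE;
    }
    // Fraction of the way from the least-staked to the most-staked node.
    let fraction = (stake - min_stake) as f64 / (max_stake - min_stake) as f64;
    // Interpolate the ratio between 1.2 (min stake) and 10.0 (max stake).
    let ratio = 1.2 * (1.0 - fraction) + 10.0 * fraction;
    (ratio * PACKET_DATA_SIZE as f64) as u64
}

fn main() {
    let (min_stake, max_stake) = (100, 10_000);
    // Unstaked peers get exactly one packet's worth of window.
    assert_eq!(receive_window_for_stake(0, min_stake, max_stake), 1232);
    // Least-staked node gets 1.2x, most-staked node gets 10x.
    assert_eq!(receive_window_for_stake(min_stake, min_stake, max_stake), 1478);
    assert_eq!(receive_window_for_stake(max_stake, min_stake, max_stake), 12320);
}
```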
Fixes #