quic: batch packet testing #36061
Conversation
267ced6 to ce3604a
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
auto response = codec_client_->makeHeaderOnlyRequest(default_request_headers_);
waitForNextUpstreamRequest();
upstream_request_->encodeHeaders(default_response_headers_, false);
// Send more than the 32 packets that will be handled in one read pass.
How do we guarantee these 35 packets arrive as a batch in the client connection's socket instead of in separate event loops? Or is the expectation of this test just that it shouldn't be flaky when they do arrive in one batch?
The best we can do is the block we have: we send them all, deadlock the backend, then do a read. We don't start the read until we confirm the backend is done with its send. I also added some backup logging to verify we got the reregister event, just to be 100% sure we were sending enough packets in the common case.
Can you add a comment that this is best effort, and that because of the deadlock you added, these packets will more likely all arrive before the client resumes the event loop?
Good idea. I had similar questions reading this.
Yeah, I'm not sure if it's best effort or not. Given loopback I suspect it's guaranteed, but I think none of us know enough about the kernel UDP implementation to be sure, so I won't claim that in comments.
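To make the mechanism described in this thread concrete, here is a rough sketch of the test flow: block the client's read path, send more than one read pass worth of packets, confirm the upstream finished sending, then let the client read. The pause/resume/wait helpers are hypothetical stand-ins for illustration, not the actual APIs added in this PR.

TEST_P(QuicBatchReadTest, ReadsMoreThanOnePassOfPackets) {
  auto response = codec_client_->makeHeaderOnlyRequest(default_request_headers_);
  waitForNextUpstreamRequest();
  upstream_request_->encodeHeaders(default_response_headers_, false);

  // Block the client's read path so packets pile up in the kernel buffer.
  pausePacketProcessing(*codec_client_);  // hypothetical helper

  // Send more than the 32 packets handled in a single read pass.
  for (int i = 0; i < 35; ++i) {
    upstream_request_->encodeData(100, false);
  }

  // Only resume reading once the upstream confirms the sends completed, so the
  // batch is (best effort) already queued when the client's event loop wakes.
  waitForUpstreamSendsComplete();          // hypothetical helper
  resumePacketProcessing(*codec_client_);  // hypothetical helper

  upstream_request_->encodeData(0, true);
  ASSERT_TRUE(response->waitForEndStream());
}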
LGTM
LGTM, modulo two small comments.
envoy::extensions::transport_sockets::tls::v3::UpstreamTlsContext& tls_context);
envoy::extensions::transport_sockets::tls::v3::UpstreamTlsContext& tls_context,
// By default, clients connect to Envoy. Allow configuring to connect to upstreams.
bool connect_to_upstream = false);
nit: Here the parameter is named connect_to_upstream but below it's connect_to_fake_upstreams. It would be good to unify them.
done
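For reference, one possible shape of the unified declaration; the enclosing function name and the final parameter spelling chosen in the PR aren't visible in this hunk, so both are assumptions here.

// Hypothetical unified declaration: the point is a single parameter name used
// consistently at every call site, whichever spelling the PR settled on.
void configureUpstreamTls(
    envoy::extensions::transport_sockets::tls::v3::UpstreamTlsContext& tls_context,
    // By default, clients connect to Envoy. Allow configuring to connect to upstreams.
    bool connect_to_upstream = false);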
Adding test infra to allow a quic test client to talk to the quic test upstream (a usage sketch follows below)
Adding an e2e test of reading when more than 32 packets are in the kernel buffer
Risk Level: low
Testing: yes
Docs Changes: n/a
Release Notes: n/a
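As noted above, a rough sketch of how the new client-to-upstream infra might be exercised; it assumes an Envoy-style integration test fixture (makeClientConnection, makeHttpConnection, fake_upstreams_) and is illustrative rather than the PR's actual code.

initialize();
// Point the QUIC test client at the fake upstream's port directly, rather
// than at Envoy's listener, exercising the new connect-to-upstream path.
codec_client_ = makeHttpConnection(
    makeClientConnection(fake_upstreams_[0]->localAddress()->ip()->port()));
auto response = codec_client_->makeHeaderOnlyRequest(default_request_headers_);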