
quic: batch packet testing #36061

Merged: 2 commits merged into envoyproxy:main on Sep 16, 2024
Conversation

alyssawilk
Contributor

Adds test infra to allow a QUIC test client to talk to the QUIC test upstream.
Adds an e2e test of reading when more than 32 packets are in the kernel buffer.

Risk Level: low
Testing: yes
Docs Changes: n/a
Release Notes: n/a


As a reminder, PRs marked as draft will not be automatically assigned reviewers,
or be handled by maintainer-oncall triage.

Please mark your PR as ready when you want it to be reviewed!

🐱

Caused by: #36061 was opened by alyssawilk.

see: more, trace.

Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
auto response = codec_client_->makeHeaderOnlyRequest(default_request_headers_);
waitForNextUpstreamRequest();
upstream_request_->encodeHeaders(default_response_headers_, false);
// Send more than the 32 packets that will be handled in one read pass.
Contributor

How do we guarantee these 35 packets arrive as a batch in the client connection's socket instead of in separate event loops? Or is the expectation of this test that it shouldn't be flaky when they arrive in one batch?

Contributor Author

The best we can do is the block we have: we send them all, deadlock the backend, then do a read, and we don't start the read until we confirm the backend is done with its send. I also did some backup logging to verify we got the re-register event, just to be 100% sure we were sending enough packets in the common case.
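
To make the 32-packets-per-read-pass limit under discussion concrete, here is a standalone sketch (an editorial illustration, not Envoy code; it assumes Linux and loopback, and uses recvmmsg() as a stand-in for whatever batching mechanism Envoy's read loop actually uses, which this thread does not show). It queues 35 UDP datagrams in a socket's kernel receive buffer before reading anything, then drains them in passes of at most 32:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE // for recvmmsg() on glibc
#endif
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <cstdio>
#include <cstring>

int main() {
  // Receiver bound to an ephemeral loopback port.
  int rx = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  bind(rx, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  socklen_t addr_len = sizeof(addr);
  getsockname(rx, reinterpret_cast<sockaddr*>(&addr), &addr_len);

  // Queue 35 small datagrams in the receiver's kernel buffer before reading any.
  int tx = socket(AF_INET, SOCK_DGRAM, 0);
  const char payload[] = "ping";
  for (int i = 0; i < 35; ++i) {
    sendto(tx, payload, sizeof(payload), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  }

  // Drain in read passes of at most 32 packets each.
  constexpr unsigned kMaxPerPass = 32;
  char bufs[kMaxPerPass][64];
  iovec iovs[kMaxPerPass];
  mmsghdr msgs[kMaxPerPass];
  for (int pass = 1;; ++pass) {
    memset(msgs, 0, sizeof(msgs));
    for (unsigned i = 0; i < kMaxPerPass; ++i) {
      iovs[i] = {bufs[i], sizeof(bufs[i])};
      msgs[i].msg_hdr.msg_iov = &iovs[i];
      msgs[i].msg_hdr.msg_iovlen = 1;
    }
    int n = recvmmsg(rx, msgs, kMaxPerPass, MSG_DONTWAIT, nullptr);
    if (n <= 0) break; // EAGAIN once the queue is empty
    printf("read pass %d: %d packets\n", pass, n);
  }
  return 0;
}

On loopback the sends complete synchronously, so the first pass should report 32 packets and the second the remaining 3, which is the situation the test in this PR tries to reproduce.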

Contributor

@danzh2010 danzh2010 Sep 11, 2024

Can you add a comment that this is best effort, and that because of the deadlock you added, these packets will most likely all arrive before the client resumes the event loop?

Contributor

Good idea. I had similar questions reading this.

Contributor Author

Yeah, I'm not sure whether it's best effort or not. Given loopback I suspect it's guaranteed, but I think none of us knows enough about the kernel UDP implementation to be sure, so I won't claim that in comments.

Contributor

@danzh2010 danzh2010 left a comment

LGTM

Contributor

@RyanTheOptimist RyanTheOptimist left a comment

LGTM, modulo two small comments.

-    envoy::extensions::transport_sockets::tls::v3::UpstreamTlsContext& tls_context);
+    envoy::extensions::transport_sockets::tls::v3::UpstreamTlsContext& tls_context,
+    // By default, clients connect to Envoy. Allow configuring to connect to upstreams.
+    bool connect_to_upstream = false);
Contributor

nit: Here the parameter is named connect_to_upstream but below it's connect_to_fake_upstreams. It would be good to unify them.

Contributor Author

done


Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
@alyssawilk alyssawilk merged commit d6a8361 into envoyproxy:main Sep 16, 2024
39 checks passed