API for H2 Buffer Limit config #1934

Closed · wants to merge 10 commits
Conversation

@junr03 (Member) commented Nov 10, 2021:

Description: this PR adds an EngineBuilder API to configure the allowed buffer limit for H2 streams. Under the hood this configures the initial_stream_window_size of Envoy's H2 protocol options.
Risk Level: medium. This is a new opt-in API, but the default moves the buffer limit from 65 KB to 10 MB.
Testing: added EngineBuilder tests.
Docs Changes: added.
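
For illustration, a minimal sketch of how a consumer might opt in. Only addH2StreamBufferLimitBytes comes from this PR's docs; the EngineBuilder spelling and fluent chaining shown here are assumptions, not the exact platform API:

// Hypothetical usage sketch; the builder naming and chaining are assumed.
auto engine = Platform::EngineBuilder()
                  .addH2StreamBufferLimitBytes(10 * 1024 * 1024) // ~10 MB per H2 stream
                  .build();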

Jose Nino added 8 commits November 9, 2021 (including "fmt" and "fix" cleanups), each signed off by Jose Nino <jnino@lyft.com>.
@junr03 marked this pull request as ready for review November 11, 2021 01:00.
@@ -213,6 +214,8 @@ const char* config_template = R"(
stat_prefix: hcm
server_header_transformation: PASS_THROUGH
stream_idle_timeout: *stream_idle_timeout
http2_protocol_options:
junr03 (Member, Author):
@alyssawilk after reading the flow control docs, and based on the evidence I have, I ended up with this change. The rest of it is Envoy Mobile plumbing to expose it to consumers, but the real change is adding this config to the API listener. Let me know what you think.

Contributor:
I guess my only thought was wondering if we want a consistent default for both HTTP/1 and HTTP/2?

Also, don't you want to set the stream limit for the upstream H2 protocol options? I think you're setting it in the HCM, so "downstream".

junr03 (Member, Author):
re: h1. I exposed the API as H2-explicit, but if we want to make it general to all HTTP, I am not opposed.

re: hcm. Yes, I only set it in the HCM because, based on my reading of the code, the docs, and the log lines we get, that is where the buffer is being blown. That is, no individual data frame blows the watermark; rather, the buffering that the platform bridge filter does is what blows the buffer limit in the filter manager.

Contributor:
ah I hadn't realized where the buffering was problematic. Yeah I think you're good then.
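
For reference, the addition discussed in this thread amounts to a fragment like the following under the API listener's HCM config. This is a sketch in the style of config_template; the exact generated YAML comes from the new builder option, with 10485760 bytes standing in for the ~10 MB default described above:

// Sketch only: where the new option lands in the generated config.
const char* h2_buffer_limit_fragment = R"(
          http2_protocol_options:
            initial_stream_window_size: 10485760  # ~10 MB; the HTTP/2 default is 65535 bytes
)";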

Jose Nino added 2 more commits, signed off by Jose Nino <jnino@lyft.com>.
@buildbreaker left a comment:

Thanks @junr03 !

@alyssawilk (Contributor) left a comment:

workaround SGTM. cc @RyanTheOptimist for what reasonable defaults are for a client stack

@RyanTheOptimist (Contributor) replied:

10 MB seems reasonable to me. Chrome uses a 15 MB connection limit and a 6 MB stream limit. This allows two streams to be full while still allowing the connection to remain unblocked. If I'm reading this PR correctly, 10 MB will be the new stream limit. What is the connection limit?
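
(For readers following the terminology: Envoy's Http2ProtocolOptions exposes both knobs side by side. Below is a sketch with placeholder values mirroring the Chrome numbers above; these are not Envoy or Envoy Mobile defaults.)

// Illustrative only: per-stream vs. per-connection HTTP/2 flow-control windows.
const char* h2_window_sketch = R"(
          http2_protocol_options:
            initial_stream_window_size: 6291456       # 6 MiB per stream
            initial_connection_window_size: 15728640  # 15 MiB per connection
)";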

@junr03 (Member, Author) commented Nov 17, 2021:

> What is the connection limit?

@RyanTheOptimist the connection limit default is 1 MB, but that should be alright because the buffering is done by Envoy Mobile's platform bridge filter, not at the connection level.

@RyanTheOptimist (Contributor) replied:

If I'm understanding correctly, setting the stream limit to 10 MB while the connection limit is 1 MB won't do much, right? It seems like the stream limit should always be set to <= the connection limit. Should we consider increasing the connection limit as well? Or decreasing the stream limit to match? Or perhaps I'm misunderstanding.

@alyssawilk (Contributor) replied:

In this case it's about making sure the configured buffer limits are higher, so I think setting H2 is sufficient.

``addH2StreamBufferLimitBytes``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Specifies the limit in bytes that the engine will buffer for h2 streams when using Envoy filters
Contributor:

Actually, sorry, coming back to this one more time: I think this doesn't work.
You're trying to set the buffer limits for what the HCM will buffer.
The HCM buffers based on the downstream "limits", which for H1 is the connection limit and for H2 is the stream limit. But for Envoy Mobile, downstream isn't H1 or H2; it's our custom library. It looks like the limit for the DirectStream is hard-coded?

// Hard-coded buffer limit returned by Envoy Mobile's DirectStream (~65 KB).
uint32_t bufferLimit() override { return 65000; }

junr03 (Member, Author):

@alyssawilk that makes sense. But now I am confused: what does the config I am setting here do, then?

Contributor:

I believe you're changing what the buffer limit would be if we had an HTTP/2 downstream codec instead of the Envoy Mobile codec. Happy to sync up on Slack if it'd help.

I think to move forward here, you're going to want something like
https://github.com/envoyproxy/envoy/blob/main/test/integration/filters/stream_info_to_headers_filter.cc
for test/common/integration/client_integration_test.cc

It basically lets you convert things available to filters and stream info into headers. In this case you could log the filter's buffer limits as a header, and confirm my suspicion that tweaking the H2 downstream option doesn't actually change the limits derived from the hard-coded Envoy Mobile codec.
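
A minimal sketch of that approach, assuming a test-only pass-through filter in the style of Envoy's integration test filters (the class name, header name, and registration below are illustrative, not the linked stream_info_to_headers_filter):

#include "envoy/registry/registry.h"

#include "source/extensions/filters/http/common/pass_through_filter.h"
#include "test/extensions/filters/http/common/empty_http_filter_config.h"
#include "test/integration/filters/common.h"

namespace Envoy {

// Test-only filter: surfaces the buffer limit the filter manager hands to
// filters as a response header, so an integration test can assert on it.
class BufferLimitToHeadersFilter : public Http::PassThroughFilter {
public:
  constexpr static char name[] = "buffer-limit-to-headers-filter";

  Http::FilterHeadersStatus encodeHeaders(Http::ResponseHeaderMap& headers, bool) override {
    // decoderBufferLimit() reflects the per-stream limit derived from the
    // downstream side, i.e. the Envoy Mobile DirectStream in this setup.
    headers.addCopy(Http::LowerCaseString("decoder-buffer-limit"),
                    decoder_callbacks_->decoderBufferLimit());
    return Http::FilterHeadersStatus::Continue;
  }
};

static Registry::RegisterFactory<SimpleFilterConfig<BufferLimitToHeadersFilter>,
                                 Server::Configuration::NamedHttpFilterConfigFactory>
    register_;

} // namespace Envoy

A client integration test could then send a request through the API listener and assert on the header value (for example, that it stays at 65000 regardless of the H2 option) to confirm where the limit actually comes from.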

junr03 (Member, Author):

Thanksgiving got in the way of this change. I will pick it back up this week.

junr03 (Member, Author):
Thanks for all the help @alyssawilk

@junr03 (Member, Author) commented Dec 23, 2021:

Opening a PR with the test that @alyssawilk suggested: #1934 (comment)

4 participants