Http2 observability and understanding connection performance in hyper #3968
brent-statsig started this conversation in General
Replies: 1 comment
The bottleneck is here. You need to create an http2 instance per core, each on its own tokio runtime, to fully utilize multi-core performance, just like actix-rt does.
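For illustration, a minimal sketch of that per-core pattern: one single-threaded tokio runtime per core, each accepting connections from a duplicated listener and driving its own hyper HTTP/2 connections. It assumes hyper 1.x with the hyper-util, http-body-util, and bytes crates; the address, worker count, and `handle` function are placeholders, and binding per-worker sockets with SO_REUSEPORT (via socket2) is an alternative to duplicating a single listener.

```rust
use std::convert::Infallible;
use std::net::TcpListener as StdListener;
use std::thread;

use bytes::Bytes;
use http_body_util::Full;
use hyper::body::Incoming;
use hyper::server::conn::http2;
use hyper::service::service_fn;
use hyper::{Request, Response};
use hyper_util::rt::{TokioExecutor, TokioIo};

// Placeholder handler; the real proxy logic would go here.
async fn handle(_req: Request<Incoming>) -> Result<Response<Full<Bytes>>, Infallible> {
    Ok(Response::new(Full::new(Bytes::from_static(b"ok"))))
}

fn main() -> std::io::Result<()> {
    // One non-blocking listener, duplicated into each worker thread.
    let std_listener = StdListener::bind("0.0.0.0:8080")?;
    std_listener.set_nonblocking(true)?;

    let workers = thread::available_parallelism()?.get();
    let mut threads = Vec::new();

    for _ in 0..workers {
        let listener = std_listener.try_clone()?;
        threads.push(thread::spawn(move || {
            // One single-threaded runtime per core, in the spirit of actix-rt.
            let rt = tokio::runtime::Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("build runtime");
            rt.block_on(async move {
                let listener =
                    tokio::net::TcpListener::from_std(listener).expect("register listener");
                loop {
                    let (stream, _peer) = match listener.accept().await {
                        Ok(conn) => conn,
                        Err(_) => continue,
                    };
                    // Each HTTP/2 (prior-knowledge/h2c) connection is driven on this core.
                    tokio::spawn(async move {
                        if let Err(err) = http2::Builder::new(TokioExecutor::new())
                            .serve_connection(TokioIo::new(stream), service_fn(handle))
                            .await
                        {
                            eprintln!("connection error: {err}");
                        }
                    });
                }
            });
        }));
    }

    for t in threads {
        let _ = t.join();
    }
    Ok(())
}
```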
The original question, from brent-statsig:
We've been using hyper for a custom proxy, and are starting to run into some bottlenecks. We've been trying to tune the available connection parameters for things like window size, stream concurrency, etc., but it feels like we are shooting in the dark a bit trying to understand where the bottleneck is occurring. An average instance is serving ~5-10k QPS and ~100-500 MB/s, and we are using tokio as our async runtime. We time requests inside our service_fn, but that doesn't actually give us insight into the time taken to serialize + flush bytes through the connection. At P99 we are seeing about a 10x difference between our service-level metrics and our load balancer metrics. Is there a hook, or a non-dirty way, to get the true observed server e2e time for a request?
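The connection parameters mentioned above map onto hyper's per-connection HTTP/2 server builder. Below is a rough sketch assuming hyper 1.x with hyper-util's TokioExecutor (in hyper 0.14 the equivalents are the http2_-prefixed methods on the server builder); the numbers are purely illustrative, not tuning advice.

```rust
use hyper::server::conn::http2;
use hyper_util::rt::TokioExecutor;

/// Builds a per-connection HTTP/2 config with the knobs discussed above.
/// The values are placeholders, not recommendations.
fn tuned_http2_builder() -> http2::Builder<TokioExecutor> {
    let mut builder = http2::Builder::new(TokioExecutor::new());
    builder
        // Per-stream flow-control window advertised to the peer.
        .initial_stream_window_size(1024 * 1024)
        // Connection-wide flow-control window shared by all streams.
        .initial_connection_window_size(4 * 1024 * 1024)
        // Alternatively, let hyper size the windows from its BDP estimate.
        .adaptive_window(false)
        // Cap on concurrently open streams per connection.
        .max_concurrent_streams(256)
        // Larger DATA frames reduce framing overhead for bulk transfers.
        .max_frame_size(64 * 1024);
    builder
}
```

The returned builder is what `serve_connection(io, service)` is then called on for each accepted connection.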
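On the last question (true server-side e2e time), one way to approximate the serialize-and-flush tail without touching hyper internals is to wrap the accepted TCP stream in an adapter that timestamps actual socket activity, and hand that to hyper via TokioIo. This is only a sketch built on tokio's AsyncRead/AsyncWrite traits; `TimedIo` and `IoTimestamps` are made-up names, and with HTTP/2 multiplexing the timestamps are per connection rather than per request, so they are mainly useful for spotting connections whose socket writes lag far behind the times recorded inside service_fn.

```rust
use std::io;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use std::time::Instant;

use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};

/// Connection-level timestamps; with HTTP/2 these cover all multiplexed
/// streams on the connection, not a single request.
#[derive(Debug, Default, Clone)]
pub struct IoTimestamps {
    pub first_read: Option<Instant>,
    pub last_write: Option<Instant>,
}

/// Wraps any AsyncRead + AsyncWrite transport and records socket activity.
pub struct TimedIo<T> {
    inner: T,
    stamps: Arc<Mutex<IoTimestamps>>,
}

impl<T> TimedIo<T> {
    pub fn new(inner: T) -> (Self, Arc<Mutex<IoTimestamps>>) {
        let stamps = Arc::new(Mutex::new(IoTimestamps::default()));
        (Self { inner, stamps: Arc::clone(&stamps) }, stamps)
    }
}

impl<T: AsyncRead + Unpin> AsyncRead for TimedIo<T> {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        let res = Pin::new(&mut self.inner).poll_read(cx, buf);
        if matches!(res, Poll::Ready(Ok(()))) {
            // First bytes observed from the peer on this connection.
            self.stamps.lock().unwrap().first_read.get_or_insert_with(Instant::now);
        }
        res
    }
}

impl<T: AsyncWrite + Unpin> AsyncWrite for TimedIo<T> {
    fn poll_write(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &[u8],
    ) -> Poll<io::Result<usize>> {
        let res = Pin::new(&mut self.inner).poll_write(cx, buf);
        if let Poll::Ready(Ok(_)) = res {
            // Last time response bytes were handed to the kernel.
            self.stamps.lock().unwrap().last_write = Some(Instant::now());
        }
        res
    }

    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.inner).poll_flush(cx)
    }

    fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
        Pin::new(&mut self.inner).poll_shutdown(cx)
    }
}
```

In the accept loop this would look roughly like `let (io, stamps) = TimedIo::new(stream);` followed by `serve_connection(TokioIo::new(io), service)`, then reading `stamps` as the connection winds down and comparing `last_write` against the completion times recorded inside service_fn.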