Improving Performance on the API Gzip Handler #12363
Conversation
Signed-off-by: Alan Protasio <alanprot@gmail.com>
See also #10782
Nice! A similar PR, but here it's using the
I think it's worth it. We should also go on with the scraper side.
I was a bit confused initially about what is going on in this PR. There are two commits, but neither one gives any explanation and one only describes half of what was done.
I think I see:
- refactoring to move selection of compression from `newCompressedResponseWriter` to `CompressionHandler.ServeHTTP`
- renaming `compressedResponseWriter`, since it now only does one kind of compression
- change from the Go standard `gzip` to `klauspost/compress` (this is in the PR description)
- changing a test to use a bigger payload (unclear why)
- adding a benchmark

(PS: you can clean up the "need >= 6 samples" in benchmark results by re-running with `-count=6`)
util/httputil/compression.go (outdated)
@@ -14,11 +14,12 @@
package httputil

import (
	"compress/gzip"
	"compress/zlib"
Why not change `zlib` to klauspost too?
Yeah, we could. Let's do it! :D
Signed-off-by: Alan Protasio <alanprot@gmail.com>
Thanks for the updates. A couple more thoughts.
util/httputil/compression.go (outdated)
@@ -30,51 +31,31 @@
// Wrapper around http.Handler which adds suitable response compression based
It's not around `http.Handler`, and doesn't look at headers.
Good catch! Thanks a lot for the comments, btw!
Signed-off-by: Alan Protasio <alanprot@gmail.com>
lgtm, thanks!
Thanks!! :D
Revert "Improving Performance on the API Gzip Handler (#12363)"
This reverts commit dfae954.
Signed-off-by: Julien Pivotto <roidelapluie@o11y.eu>
Using the github.com/klauspost/compress package to replace the current Gzip Handler on the API. We see significant improvements using this handler over the current one, as shown in the benchmark added. Also:
* move selection of compression from `newCompressedResponseWriter` to `*CompressionHandler.ServeHTTP`.
* renaming `compressedResponseWriter` since it now only does one kind of compression.
Signed-off-by: Alan Protasio <alanprot@gmail.com>
Using the github.com/klauspost/compress package to replace the current Gzip handler on the API. We can see significant improvements using this handler over the current one, as shown in the benchmark:
For instance, for a 4 MB compressed response, compression time decreased from 1.1s to 237ms (-78.64%) and allocated memory decreased from 26.6M to 5.5M (-78.95%), while the response size only increased from 4.863M to 4.904M (+0.85%).
PS: This API is already being used in Thanos: thanos-io/thanos#6332