Add enable_compression scrape config option #13166
Conversation
Currently Prometheus will always request gzip compression from the target when sending scrape requests. HTTP compression reduces the number of bytes sent over the wire and so is often desirable. The downside of compression is that it requires extra resources: CPU & memory. This also affects resource usage on the target, since it has to compress the response before sending it to Prometheus.

This change adds a new option to the scrape job configuration block: enable_compression. The default is true, so behaviour remains the same as in current Prometheus.

Setting this option to false allows users to disable compression between Prometheus and the scraped target, which will require more bandwidth but lowers the resource usage of both Prometheus and the target.

Fixes prometheus#12319.

Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
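Conceptually, the option only changes what the scrape request asks the target for. A minimal Go sketch of the idea (the helper name newScrapeRequest is hypothetical, for illustration only, not the actual Prometheus scrape code):

```go
package main

import (
	"fmt"
	"net/http"
)

// newScrapeRequest builds a scrape request, optionally asking the target
// for a gzip-compressed response. (Hypothetical helper, for illustration.)
func newScrapeRequest(url string, enableCompression bool) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	if enableCompression {
		req.Header.Set("Accept-Encoding", "gzip")
	} else {
		// "identity" asks the target to send the response uncompressed.
		req.Header.Set("Accept-Encoding", "identity")
	}
	return req, nil
}

func main() {
	on, _ := newScrapeRequest("http://localhost:9090/metrics", true)
	off, _ := newScrapeRequest("http://localhost:9090/metrics", false)
	fmt.Println(on.Header.Get("Accept-Encoding"))  // gzip
	fmt.Println(off.Header.Get("Accept-Encoding")) // identity
}
```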
LGTM, waiting for green
@@ -582,6 +583,8 @@ type ScrapeConfig struct {
	MetricsPath string `yaml:"metrics_path,omitempty"`
	// The URL scheme with which to fetch metrics from targets.
	Scheme string `yaml:"scheme,omitempty"`
+	// Indicator whether to request compressed response from the target.
+	EnableCompression bool `yaml:"enable_compression"`
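Since the default is true rather than the Go zero value, decoding has to start from a defaults-populated struct so that an omitted field keeps its default. A rough stdlib sketch of that pattern, using encoding/json in place of the YAML decoding Prometheus actually does:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ScrapeConfig struct {
	EnableCompression bool `json:"enable_compression"`
}

// defaultScrapeConfig mirrors the pattern of seeding the struct with
// defaults before decoding, so an omitted field keeps its default value.
var defaultScrapeConfig = ScrapeConfig{EnableCompression: true}

func decode(data string) (ScrapeConfig, error) {
	cfg := defaultScrapeConfig // start from defaults
	err := json.Unmarshal([]byte(data), &cfg)
	return cfg, err
}

func main() {
	omitted, _ := decode(`{}`)
	explicit, _ := decode(`{"enable_compression": false}`)
	fmt.Println(omitted.EnableCompression)  // true (default preserved)
	fmt.Println(explicit.EnableCompression) // false
}
```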
It's usually awkward to add a flag that you have to set to get unchanged behaviour. So, could this be disable_compression?
The value of this flag defaults to true, so no, you don't have to set it to get the default behaviour. You need to explicitly set it to false to get a different behaviour than the default one.
The default is true, and I prefer positive flags over negative ones.
This is not user-visible; for end users I prefer that we use an enable_ flag.
Thanks!
FYI I've disabled compression for all scrapes going over the local network. Each Prometheus here scrapes the same set of 138K targets. Disabling scrape compression reduced memory usage by somewhere around 4-8%. pprof confirmed that all of these flate & gzip allocations were gone once it was deployed.
@prymitive Have you tested any Go GC tuning? We found that for higher-churn / larger Prometheus instances, lowering the default GOGC value helped.
Yes, we’ve been running with GOGC=40 on all instances for a while now.
I've just read this in the 2.49.0-rc.0 release notes, and one thing stuck out: is there a reason this is enable_compression rather than disable_compression? Purely from a config UX perspective, I think the latter better highlights that this option is exceptional. As noted in the original issue, the disable sense is also what others went with. If a new user was looking through the scrape config options, a disable_ setting would make the non-default choice more obvious. It also has the minor benefit that the Go zero value would be the default. Please let me know if I'm missing something!
Defaulting to true keeps the current behaviour. I do agree that it should be false by default, especially as all the Prometheus docs I've seen recommend scraping as close to the target as possible, which usually means the local network. Gzip compression is mostly beneficial when scraping over the internet or any other "remote" network. For 9/10 scrape configs it's probably better to have compression off.
It actually turns out in real-world practice that compression, even on a local network, is better. We have a large node.js app that produces a lot of metrics. By fixing compression on the client side we reduced the network bandwidth used by Prometheus by 90%. It also reduced the scrape duration, since the scrape output size was smaller. This had a measurable improvement to end-user latency: since node.js is single-threaded, Prometheus scrapes were increasing end-user latency.

Gzip compression is CPU-accelerated on modern servers. The trivial amount of CPU used by compression is very much worth the cost for most users, which is why we never had to add such a feature in over 10 years of Prometheus being used.

As for the flag naming, we prefer to be consistent in having "positively named" flags, so that a confusing double negation is not required to enable a feature. This also keeps things consistent across all features and documentation.
Thanks both. I definitely don't think compression should be disabled by default, even in 3.x. This was purely about a negative vs. positive parameter name, to encourage enabling compression. @SuperQ's consistency point makes sense, so if the flag is kept positively named, that works for me.
Thanks for challenging this; generally, @gebn, I would agree, but it's also a minor thing and a matter of taste (a "negative" option looks weird if other settings only "enable" things, vs. reflecting the default value). It was discussed here. We also don't have all the benefits of the Go zero value here, due to our explicit default struct flows. It's very easy to disable compression with an explicit enable_compression: false. I am fine with both, but thanks for checking up! If we missed some rationale, we are keen to learn here (e.g. and change it in 3.x) 👍🏽
It is a pity that Prometheus decided on using enable_compression instead of disable_compression.
…me way as Prometheus does Updates prometheus/prometheus#13166 Updates prometheus/prometheus#12319 Do not document enable_compression option at docs/sd_configs.md, since vmagent already supports more clear disable_compression option - see https://docs.victoriametrics.com/vmagent/#scrape_config-enhancements
@valyala Yes, we prefer to avoid the confusion of double negatives.