Does federate support request compression? #2469
Comments
bgeesaman commented Mar 3, 2017:
Never mind. I'm not sure why I didn't see the code section with that enabled earlier. Apologies. It appears my nginx configuration (providing basic auth) wasn't proxying the compression.

bgeesaman closed this on Mar 3, 2017.
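The closing comment suggests the nginx proxy hop was dropping the compression negotiation. A hypothetical sketch of a basic-auth nginx front end that forwards the scraper's `Accept-Encoding` header upstream (the actual config is not shown in the issue; the upstream name `prometheus-core` and paths are assumptions):

```nginx
# Hypothetical config, not from the issue. If Accept-Encoding is stripped
# or overridden on the proxy hop, the upstream Prometheus never sees the
# scraper's request for gzip and responds uncompressed.
server {
    listen 443 ssl;

    auth_basic           "Prometheus";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://prometheus-core:9090;
        # Forward the client's Accept-Encoding so Prometheus can return a
        # gzip-compressed /federate response that passes through unchanged.
        proxy_set_header Accept-Encoding $http_accept_encoding;
    }
}
```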
lock bot commented Mar 23, 2019:
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators on Mar 23, 2019.
bgeesaman commented Mar 3, 2017:
What did you do?
Attempting to scrape 10 targets of ~40MB each via /federate every 30 seconds.
What did you expect to see?
The requests to use compression.
What did you see instead? Under which circumstances?
The requests don't appear to use compression, based on the nginx logs (each cluster has an nginx pod in front of prometheus-core for logging).
Environment
CentOS 7 running a standalone Prometheus image, prom/prometheus:latest, scraping the same version of Prometheus inside multiple Kubernetes 1.5.3 clusters.
Linux 3.10.0 el7 x86_64
I am publishing each cluster's Prometheus service behind an nginx container, behind an ELB, so that the federating Prometheus can scrape them. When I curl /federate without the --compressed flag, I download a ~40MB response; with the flag, about 5MB. I searched the issues and the repo for "gzip" and "zip" but found no hard data on this. Given that I'm transferring 400+MB every 30s when it could be ~50MB, compression seems like a nice option.
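The ~8x reduction reported with `curl --compressed` is plausible because the Prometheus exposition format is highly repetitive label text. A self-contained sketch (not Prometheus code; the sample metric lines are made up) illustrating the kind of gzip ratio such text achieves:

```shell
# Generate ~5000 repetitive exposition-format-style lines and compare sizes.
tmp=$(mktemp)
printf 'http_requests_total{code="200",instance="10.0.0.%d:9090"} 1027\n' \
    $(seq 1 5000) > "$tmp"
orig=$(wc -c < "$tmp")
comp=$(gzip -c "$tmp" | wc -c)
echo "original=${orig} bytes, gzipped=${comp} bytes"
```

The compressed size comes out far smaller than the original, which is consistent with the 40MB-to-5MB difference observed with and without `--compressed`.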