
Adding a configurable scrapeTimeout for prometheus operator. #539

Conversation

gmcquillan
Contributor

Hi CoreOS team,

I couldn't find a comprehensive developer's guide for ensuring this works. I figured I'd build it locally and test it out in our QA cluster. If there's something faster and easier than just running go test in the pkg/prometheus subdir, I'd love to hear about it.

Cheers!

@coreosbot

Can one of the admins verify this patch?

@gmcquillan gmcquillan closed this Aug 3, 2017
@gmcquillan gmcquillan reopened this Aug 3, 2017
@coreosbot

Can one of the admins verify this patch?

@brancz
Contributor

brancz commented Aug 7, 2017

make test executes all unit tests. To run the e2e tests on a local minikube:

1. Compile the static binary (which is what is used for the container images) with make crossbuild.
2. Point your Docker client at the Docker host inside minikube by running eval $(minikube docker-env).
3. Build the container image with make container.
4. Finally, run the e2e tests with make e2e-tests.
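As a sketch, the workflow described above looks like the following (assuming a checkout of the operator repository and a running minikube; the make targets are the ones named in this thread):

```shell
# Run all unit tests
make test

# Build the static binary that the container images are built from
make crossbuild

# Point the Docker client at the Docker daemon inside minikube,
# so the image built next is visible to the cluster
eval $(minikube docker-env)

# Build the container image
make container

# Run the end-to-end tests against the minikube cluster
make e2e-tests
```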

Note that generally we also run all of this on CI; however, we're having some trouble with our Jenkins instances right now. They should be working again soon.

As a side note, a section in the README on developing might be helpful for future reference. Would you like to create a PR to add such a section 🙂?

@gmcquillan
Contributor Author

gmcquillan commented Aug 7, 2017 via email

@fabxc
Contributor

fabxc commented Aug 7, 2017

My opinion so far has been that the scrape timeout being as long as the scrape interval should generally be sane for all setups.
Can you elaborate on the exact use case that makes you want to set it explicitly?
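For reference, in raw Prometheus configuration both values live under the global section of prometheus.yml (a hedged sketch; the values are illustrative, not defaults):

```yaml
global:
  scrape_interval: 30s   # how often targets are scraped
  scrape_timeout: 30s    # illustrative: a timeout as long as the interval, per the comment above
```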

@gmcquillan
Contributor Author

gmcquillan commented Aug 7, 2017 via email

@brancz
Contributor

brancz commented Aug 7, 2017

In that case I think the functionality from this PR #537 might already be all we need. I'd personally also prefer not to bloat the Prometheus object itself.

@gmcquillan
Contributor Author

Sure. That PR didn't exist when I began work on this branch. I'd be happy with either one getting merged.

I'd argue, however, that making the API spec attribute name match the Prometheus configuration attribute is a good idea (i.e. s/timeout/scrape_timeout/).

@gmcquillan
Contributor Author

Closing in favor of #537

@gmcquillan gmcquillan closed this Aug 7, 2017
@gmcquillan gmcquillan deleted the thread-prom-scrape-timeout-spec branch August 7, 2017 17:48
@carlosedp
Contributor

carlosedp commented May 26, 2020

Any chance of getting this functionality into the operator? It would help set a global timeout for ServiceMonitors that don't specify one.

A use case would be the many Raspberry Pi clusters where some scrapes time out due to low performance.

I can send a new PR addressing this.

Cc. @geerlingguy @brancz

@brancz
Contributor

brancz commented May 27, 2020

I'm not entirely opposed to it, but I do have to wonder how much that will actually help you, as ServiceMonitors/PodMonitors can specify scrape timeouts themselves which will always take precedence.
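For illustration, a ServiceMonitor can set its timeout per endpoint, and that per-endpoint value would take precedence over any global default (a sketch; the names and values are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app        # illustrative name
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: metrics
    interval: 30s
    scrapeTimeout: 20s     # per-endpoint timeout; overrides any global default
```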

@carlosedp
Contributor

Many targets, including the core ones (kube-apiserver, kube-scheduler, kube-controller-manager), don't have scrape_timeout definitions, so having the ability to change the default value is desirable.

Opened #3250 to address this.
