Kubernetes Service Discovery sometimes gets the port wrong #3826

Closed
lorenz opened this Issue Feb 12, 2018 · 4 comments

lorenz commented Feb 12, 2018

What did you do?
I deployed a new exporter and created annotations for Prometheus

$ kubectl describe pod xyz
...
Annotations:    prometheus.io/port=9404
                prometheus.io/scrape=true
....
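
For context, these annotations sit in the deployment's pod template metadata, roughly like this (the container name and image below are placeholders, not the actual deployment):

  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9404"
    spec:
      containers:
      - name: exporter            # placeholder name
        image: example/exporter   # placeholder image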

What did you expect to see?
Prometheus should scrape :9404/metrics and show the annotation __meta_kubernetes_pod_annotation_prometheus_io_port="9404" under Service Discovery.
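
(This expectation follows the usual prometheus.io annotation convention; the stock Helm config rewrites the scrape address from the port annotation with relabelling along these lines, sketched here from the standard example config rather than copied from the chart:)

relabel_configs:
# keep only pods annotated with prometheus.io/scrape: "true"
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
  action: keep
  regex: true
# override the metrics path from prometheus.io/path, if set
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
  action: replace
  target_label: __metrics_path__
  regex: (.+)
# rewrite the target port from prometheus.io/port, if set
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
  action: replace
  regex: ([^:]+)(?::\d+)?;(\d+)
  replacement: $1:$2
  target_label: __address__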

What did you see instead? Under which circumstances?
I have 4 deployments where this works perfectly. This new one, however, just defaults to port 80. I manually added/removed port and path annotations to see how it behaves: setting the path annotation causes the path to update, but setting the port annotation to any value, or removing it, does nothing (no, not a typo, I checked). The Service Discovery page also showed a non-80 port, but not the correct one. After a Prometheus restart the SD annotation is now correct (__meta_kubernetes_pod_annotation_prometheus_io_port="9404"), but it still scrapes port 80.

Kubernetes performed a master switch (hence the one error log entry at the end), but Prometheus should be able to handle that. Not sure if it's related, but it's the only log entry not related to TSDB.

Environment

  • System information:

    Linux 4.13-4.14 / CoreOS / Kubernetes 1.9.2

  • Prometheus version:

    version=2.1.0, branch=HEAD, revision=85f23d82a045d103ea7f3c89a91fba4a93e6367a

  • Prometheus configuration file:
    Stock Helm Kubernetes Config

  • Logs:

level=info ts=2018-02-04T17:44:45.970764918Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-04T17:44:45.971798104Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-04T17:44:45.972342563Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-04T17:44:45.974142033Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-04T17:44:45.974641411Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2018-02-04T17:44:45.975267471Z caller=kubernetes.go:191 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2018-02-10T08:52:21.296095929Z caller=main.go:221 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:270: Failed to watch *v1.Pod: the server has asked for the client to provide credentials (get pods)"

brian-brazil commented Feb 12, 2018

It makes more sense to ask questions like this on the prometheus-users mailing list rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.

gtaylor commented Apr 11, 2018

@lorenz Did you end up finding an answer to this? Seeing the same set of symptoms with the upstream charts.

lorenz commented Apr 12, 2018

@gtaylor Yes, you need at least one port declared in the Kubernetes pod specification. The declaration doesn't actually do anything, but it needs to be there. I haven't investigated further; it's probably a stupid check somewhere.
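
In other words, something like this in the container spec is enough (names are placeholders):

spec:
  containers:
  - name: exporter            # placeholder name
    image: example/exporter   # placeholder image
    ports:
    - containerPort: 9404     # the declaration itself is what matters;
                              # nothing else needs to reference it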

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
