
server returned HTTP status 403 Forbidden when using recommended Kubelet config #5334

Closed
robbinscp opened this Issue Mar 11, 2019 · 5 comments

robbinscp commented Mar 11, 2019

Bug Report

What did you do?

  • Deployed Prometheus per this workshop (aws-samples/aws-workshop-for-kubernetes) in EKS
  • Adjusted the EKS BootstrapArguments to include webhook authentication (--authentication-token-webhook --authorization-mode=Webhook); see the sketch below
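For reference, on the EKS-optimized AMI those flags reach the kubelet through the node bootstrap script. A minimal sketch, assuming the stock /etc/eks/bootstrap.sh and a placeholder cluster name:

```
# BootstrapArguments from the worker-node template; --kubelet-extra-args
# forwards the quoted flags verbatim to the kubelet unit.
/etc/eks/bootstrap.sh <cluster-name> \
  --kubelet-extra-args '--authentication-token-webhook --authorization-mode=Webhook'
```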

What did you expect to see?

  • kubelet monitoring to work over HTTPS (scrape config sketched below)
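For context, the kubelet scrape job follows the usual pattern from the Prometheus Kubernetes example configuration. A sketch (the job name is illustrative; the credential paths are the in-cluster service-account defaults):

```
- job_name: kubernetes-nodes   # illustrative name
  scheme: https                # scrape the kubelet's secure port (10250)
  kubernetes_sd_configs:
  - role: node
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # kubelet serving certs often don't match the discovered address,
    # so many setups also set insecure_skip_verify: true here
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```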

What did you see instead? Under which circumstances?

  • HTTP 403 Forbidden errors from /metrics and /metrics/cadvisor (see the manual check below)
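One way to reproduce the 403 outside Prometheus is to call the kubelet directly with the same credentials. A sketch, assuming the scraping pods run under a service account named prometheus in the monitoring namespace (the name is an assumption) and a real node IP in place of the placeholder:

```
# Read the service-account token (account name "prometheus" is assumed)
TOKEN=$(kubectl -n monitoring get secret \
  "$(kubectl -n monitoring get sa prometheus -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 -d)

# 200 means the kubelet's webhook authorization passed;
# 403 reproduces the failure shown on the targets page.
curl -sk -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $TOKEN" \
  "https://<node-ip>:10250/metrics"
```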

Environment

  • System information:

    • AWS EKS running k8s v1.11.5
  • Prometheus version:

```
/prometheus $ prometheus --version
prometheus, version 2.7.1 (branch: HEAD, revision: 62e591f928ddf6b3468308b7ac1de1c63aa7fcf3)
  build user:       root@f9f82868fc43
  build date:       20190131-11:16:59
  go version:       go1.11.5
```
  • Prometheus ClusterRole (the role usually recommended for kubelet scraping is sketched after the logs below):

```
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
  namespace: monitoring
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
```
  • Logs:

```
$ kubectl logs prometheus-prometheus-0 prometheus -n monitoring
level=warn ts=2019-03-11T19:00:30.937934808Z caller=main.go:295 deprecation_notice="\"storage.tsdb.retention\" flag is deprecated use \"storage.tsdb.retention.time\" instead."
level=info ts=2019-03-11T19:00:30.93799031Z caller=main.go:302 msg="Starting Prometheus" version="(version=2.7.1, branch=HEAD, revision=62e591f928ddf6b3468308b7ac1de1c63aa7fcf3)"
level=info ts=2019-03-11T19:00:30.938007857Z caller=main.go:303 build_context="(go=go1.11.5, user=root@f9f82868fc43, date=20190131-11:16:59)"
level=info ts=2019-03-11T19:00:30.938026738Z caller=main.go:304 host_details="(Linux 4.14.97-90.72.amzn2.x86_64 #1 SMP Tue Feb 5 20:46:19 UTC 2019 x86_64 prometheus-prometheus-0 (none))"
level=info ts=2019-03-11T19:00:30.938045176Z caller=main.go:305 fd_limits="(soft=65536, hard=65536)"
level=info ts=2019-03-11T19:00:30.938061666Z caller=main.go:306 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2019-03-11T19:00:30.938605367Z caller=main.go:620 msg="Starting TSDB ..."
level=info ts=2019-03-11T19:00:30.938659047Z caller=web.go:416 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2019-03-11T19:00:30.943032894Z caller=main.go:635 msg="TSDB started"
level=info ts=2019-03-11T19:00:30.943067417Z caller=main.go:695 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:00:30.943888397Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:00:30.944479494Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:00:30.944498916Z caller=main.go:589 msg="Server is ready to receive web requests."
level=info ts=2019-03-11T19:00:34.648602919Z caller=main.go:695 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:00:34.64918768Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:00:34.649687284Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:00:34.650146982Z caller=main.go:695 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:00:34.650734755Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:00:34.651383094Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=error ts=2019-03-11T19:00:34.65139223Z caller=endpoints.go:130 component="discovery manager notify" discovery=k8s role=endpoint msg="endpoints informer unable to sync cache"
level=info ts=2019-03-11T19:00:34.652540215Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:01:57.248286374Z caller=main.go:695 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:01:57.249022871Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:01:57.249606349Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:01:57.250648784Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:03:34.748902161Z caller=main.go:695 msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=info ts=2019-03-11T19:03:34.750899939Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:03:34.751607373Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:03:34.752336608Z caller=kubernetes.go:201 component="discovery manager scrape" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:03:34.753107807Z caller=kubernetes.go:201 component="discovery manager notify" discovery=k8s msg="Using pod service account via in-cluster config"
level=info ts=2019-03-11T19:03:34.754345485Z caller=main.go:722 msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
level=warn ts=2019-03-11T19:16:58.781431599Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26099248 (26099697)"
level=warn ts=2019-03-11T19:17:26.781157913Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26099248 (26099809)"
level=warn ts=2019-03-11T19:17:45.765868465Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26099248 (26099879)"
level=warn ts=2019-03-11T19:33:03.792212439Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26102047 (26103092)"
level=warn ts=2019-03-11T19:34:07.776974476Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26102228 (26103316)"
level=warn ts=2019-03-11T19:34:48.799587521Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:300: watch of *v1.Endpoints ended with: too old resource version: 26102188 (26103485)"
level=warn ts=2019-03-11T19:39:44.778967143Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:302: watch of *v1.Pod ended with: too old resource version: 26099210 (26099631)"
level=warn ts=2019-03-11T19:44:01.789653614Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:302: watch of *v1.Pod ended with: too old resource version: 26099210 (26100459)"
level=warn ts=2019-03-11T19:44:15.80157809Z caller=klog.go:86 component=k8s_client_runtime func=Warningf msg="/app/discovery/kubernetes/kubernetes.go:302: watch of *v1.Pod ended with: too old resource version: 26099210 (26100624)"
```
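For comparison: with --authorization-mode=Webhook the kubelet sends a SubjectAccessReview to the API server for every scrape, checking that the caller may get the nodes/metrics (and nodes/proxy) subresources. The ClusterRole usually recommended for Prometheus therefore looks roughly like this sketch (the name is illustrative, and it must be bound to the service account the Prometheus pods actually run under):

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus   # illustrative
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics    # what the kubelet's webhook checks for /metrics
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
```

Note that the wildcard ClusterRole above is bound to the kube-state-metrics service account; if the Prometheus pods run under a different account, that grant never applies to their scrapes, which would produce exactly these 403s.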
simonpasquier commented Mar 11, 2019

You would need to file this issue directly with https://github.com/aws-samples/aws-workshop-for-kubernetes.

I'm closing it for now. If you have further questions, please use our user mailing list, which you can also search.

robbinscp commented Mar 11, 2019

To be clear, this is a Prometheus issue. There were changes I made to get the manifest to work, but the root of this issue lies between Prometheus and the kubelet. I have followed the Prometheus documentation on setup and related issues (coreos/prometheus-operator#1503), and I don't understand why this is being closed so quickly.

simonpasquier commented Mar 11, 2019

You need to find what is wrong with your Prometheus configuration, but GitHub issues aren't the proper venue. You'll have a better chance on the users mailing list or IRC (#prometheus).

robbinscp commented Mar 11, 2019

Understood. Thanks for the explanation - it is appreciated.

simonpasquier commented Mar 11, 2019

No problem, thanks for your understanding too :-)
