Cannot scrape metrics: Unauthorized #13
Comments
I found my issue with k8s audit.
Great! So the deployment went fine? All Prometheus targets OK? Did you have to change anything in the manifests?
No, I couldn't make it work at all with So I recreated my cluster with
I believe having kube-rbac-proxy use the new API is the most appropriate way. I think the relevant lines are https://github.com/brancz/kube-rbac-proxy/blob/1ebcc7b8cd227a80360e381562857751fde266e9/main.go#L177 and https://github.com/brancz/kube-rbac-proxy/blob/1ebcc7b8cd227a80360e381562857751fde266e9/main.go#L185. I never tested the monitoring stack on K3s but I might give it a try (and also rebuild kube-rbac-proxy in the process). What do you think @brancz / @ibuildthecloud, might it be useful to have the monitoring stack for K3s?
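For context, the delegated-authentication call those lines make is essentially a TokenReview against the API server. In the newer API group version the request body looks roughly like this (a sketch, not taken from the proxy's code; the token value is a placeholder):

```yaml
# Hedged sketch of a TokenReview request in the authentication.k8s.io/v1
# group version; the token value is a placeholder.
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: <bearer-token-to-validate>
```

The API server fills in `status.authenticated` and the user info, which the proxy then uses for the authorization check.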
I don't know enough about k3s to answer that, but kube-rbac-proxy also has a TLS mode, which should work as well. Of course, distributing certs may not actually be easier, or even feasible.
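For reference, the TLS mode mentioned above authenticates scrapers by client certificate instead of TokenReview calls. A hedged container-spec sketch (the image tag and mount paths are placeholders; flag names should be checked against the kube-rbac-proxy version in use):

```yaml
# Hedged sketch: kube-rbac-proxy with client-certificate authentication,
# avoiding the TokenReview round-trip to the API server.
- name: kube-rbac-proxy
  image: quay.io/brancz/kube-rbac-proxy:v0.4.1   # placeholder tag
  args:
  - --secure-listen-address=0.0.0.0:8443
  - --upstream=http://127.0.0.1:8080/
  - --tls-cert-file=/etc/tls/tls.crt
  - --tls-private-key-file=/etc/tls/tls.key
  - --client-ca-file=/etc/tls/ca.crt   # CA that signed the scraper's client cert
```

Prometheus would then need a client certificate signed by that CA, which is the cert-distribution problem noted above.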
I faced the same issue in my k3s cluster and decided to take a closer look at making changes to kube-rbac-proxy. @carlosedp, your comment about the code lines is correct, but it goes a bit deeper. This code imports Compared to the k3s implementation that uses So I guess I need to change the dependencies and recompile the project for it to work with k3s?
Guys, can you please check the k3s branch (https://github.com/carlosedp/cluster-monitoring/tree/k3s), where I removed the proxy from node_exporter and kube-state-metrics via overrides? Let me know if it works.
So I deployed the changes in my cluster and most things seem to work, but I found the following errors. The kube-state-metrics service targets port http but the pod seems to expose the metrics on port 8080, causing the scraping to fail; editing the service targetPort makes things work. The kubelet ServiceMonitor sets the port to http-metrics, while the kubelet service has the port https-metrics, so the port and scheme in that ServiceMonitor are wrong. I need some more time to verify things once more to see if I have missed anything. I should probably test this on a clean cluster just to be sure the error is not on my end.
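The first mismatch above (a named targetPort that no containerPort resolves to) can be caught mechanically. A minimal Python sketch, with the objects inlined as plain dicts for illustration (in a real cluster they would come from the API or the YAML manifests):

```python
# Hedged sketch: cross-check a Service's targetPort values against the pod's
# containerPorts -- the mismatch class described above.

service = {
    "ports": [{"name": "http-main", "port": 8080, "targetPort": "http"}],
}
pod = {
    "containers": [
        # Assumption for illustration: the container exposes 8080 but the
        # port is unnamed, so a named targetPort cannot resolve to it.
        {"name": "kube-state-metrics", "ports": [{"containerPort": 8080}]},
    ],
}

def unresolved_target_ports(service, pod):
    """Return Service targetPorts that match no containerPort name or number."""
    exposed = set()
    for container in pod["containers"]:
        for p in container.get("ports", []):
            exposed.add(p["containerPort"])
            if "name" in p:
                exposed.add(p["name"])
    bad = []
    for sp in service["ports"]:
        target = sp.get("targetPort", sp["port"])  # default: same as port
        if target not in exposed:
            bad.append(target)
    return bad

print(unresolved_target_ports(service, pod))  # -> ['http']
```

Naming the container port `http` (or switching the Service to a numeric targetPort) makes the list come back empty.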
@phillebaba I just applied some changes fixing the errors you reported. Can you check the last commit on the k3s branch?
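The kubelet fix amounts to aligning the ServiceMonitor's port and scheme with the Service's https-metrics port. A hedged sketch using standard Prometheus Operator fields (the label selector and TLS settings are assumptions, not the repo's exact manifest):

```yaml
# Hedged sketch: ServiceMonitor endpoint matching the kubelet Service's
# https-metrics port, scraped over HTTPS with the serviceAccount token.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet
  namespace: monitoring
spec:
  endpoints:
  - port: https-metrics
    scheme: https
    bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    tlsConfig:
      insecureSkipVerify: true   # assumption: self-signed kubelet serving certs
  selector:
    matchLabels:
      k8s-app: kubelet
```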
There seemed to be some issues with the changes. I ended up with the following Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kube-state-metrics
  name: kube-state-metrics
  namespace: monitoring
spec:
  clusterIP: None
  ports:
  - name: http-main
    port: 8080
    targetPort: "8080"
  - name: http-self
    port: 8081
    targetPort: "8081"
  selector:
    app: kube-state-metrics
```

The only issue I have now is the kube-apiserver and kube-controller-manager ServiceMonitors. The reason those don't work is that those components do not run in pods, so the services point to nothing. I don't consider those errors a bug, but I will get back when I find a solution. My initial idea is to run a dummy pod on the master node that can expose the local ports on the node.
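Besides a dummy pod, another way to scrape control-plane processes that don't run as pods is a selector-less Service backed by a manually-defined Endpoints object. A hedged sketch for kube-controller-manager (the IP is a placeholder for the master node; the port is the historical insecure metrics port and may differ on your version):

```yaml
# Hedged sketch: selector-less Service plus manual Endpoints pointing at a
# process on the node itself. IP and port are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: kube-controller-manager
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
spec:
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10252
    targetPort: 10252
---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-controller-manager   # must match the Service name
  namespace: kube-system
  labels:
    k8s-app: kube-controller-manager
subsets:
- addresses:
  - ip: 192.168.1.10   # placeholder: master node IP
  ports:
  - name: http-metrics
    port: 10252
```

Because the Service has no selector, Kubernetes leaves the Endpoints object alone, so the addresses must be kept up to date by hand.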
Ah yes, check the last commit. The ports were not exposed and named on the KSM pod. Might be fixed now.
I just deployed a VM and installed K3s. Currently fixing most errors. Will have a working stack soon.
Great! I will continue to verify the changes, but I won't be able to do it until after work tomorrow.
Hey guys, the last commit in the k3s branch makes all metrics get collected in Prometheus. Check the Readme section https://github.com/carlosedp/cluster-monitoring/tree/k3s#customizing-for-k3s for more info.
Hey Carlos, thanks for sorting out the k3s issue. Most of the targets are now working for me. I'm still getting errors with the following scrapers:
All of them are not reachable, and I'm getting this error:
Yes, it is going out to the correct master node IP address, but I think it's timing out because of a slow response; I can hit one of the endpoints using curl.
Yes, if curl reaches the IP:port for both services, it should work.
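If the target answers curl but Prometheus still times out, one knob to try is the scrape timing on the affected ServiceMonitor. A hedged fragment using standard Prometheus Operator fields (the values are examples, not recommendations):

```yaml
# Hedged sketch: relaxing scrape timing on a slow endpoint.
spec:
  endpoints:
  - port: http-metrics
    interval: 60s
    scrapeTimeout: 30s   # must stay below interval
```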
So I tried the changes on a clean cluster and all of the targets work with the new changes. The documentation was easy to understand, and everything just worked. I am guessing the solution of setting the master IP for the endpoints will become an issue when k3s supports HA, but that will be a problem for when it happens. Unless anyone else has found any issues, I would say this is good enough to merge. Thanks for the great work @carlosedp!
Awesome, thanks for all the testing @phillebaba :) I was thinking about having two separate directories of pre-built manifests, one for K8s and another for K3s, so people wouldn't need to rebuild it all from jsonnet. Ideas?
I am personally fine with the docs and just generating the manifests myself. The Makefile does everything, so it's pretty simple. I guess it's just a question of ease of use for others, so that they notice it's not just plug and play if you're using a k3s cluster.
Fixed through #18.
I created a cluster of 6 nodes with `k3s`, using the first one as server and the 5 others as agents. I followed your readme, made some changes for my own `nfs` settings, and finally built the manifests and applied them. But my Prometheus instance cannot scrape `/metrics` when it's protected by `kube-rbac-proxy`.

I tried to `curl` manually from the Prometheus pod, using the serviceAccount token, to see if it was a Prometheus configuration issue, but I found the same problem. Checking the log from `kube-rbac-proxy` I found:

Did I forget to do something? Or is it maybe an issue with `k3s` itself?
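The manual check described above can be sketched as follows. This is a hedged example: the URL is a placeholder for whichever proxied target is being debugged, and the token path is the default serviceAccount mount inside a pod.

```shell
# Hedged sketch: manual scrape check from inside the Prometheus pod.
TOKEN_FILE=/var/run/secrets/kubernetes.io/serviceaccount/token
URL="https://node-exporter.monitoring.svc.cluster.local:9100/metrics"  # placeholder
if [ -f "$TOKEN_FILE" ]; then
  # -k skips TLS verification; acceptable for a debugging session only
  curl -sk -H "Authorization: Bearer $(cat "$TOKEN_FILE")" "$URL"
else
  echo "not inside a pod: $TOKEN_FILE missing"
fi
```

A 401 Unauthorized here, with a valid token, points at the proxy's token-validation path rather than at the Prometheus scrape config.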