Kubernetes client New() function has incorrect logic #2334
Comments
I think something like
Thanks for reporting this @verdverm. I was able to reproduce your problem. Does this resemble your results? Working on a fix now.
Yes, that looks the same.
@verdverm The canonical approach to authenticating API clients running inside a cluster is to use a so-called service account and collect its credentials via the Secret volume mounted into the pod. This is entirely managed by Kubernetes and described here: https://kubernetes.io/docs/admin/service-accounts-admin/ I'm almost done with a PR to implement the correct service-account behaviour and I would like to know more about your use case so I can test against it. Thanks a lot for helping out on this one.
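For reference, the service-account credentials are mounted at well-known paths inside every pod, so a scrape configuration can rely on them directly. A rough sketch of such a section is below; the field names follow the Prometheus configuration docs and can differ slightly between 1.x versions, so treat this as illustrative rather than a drop-in config:

```yaml
scrape_configs:
  - job_name: 'kubernetes-apiservers'
    scheme: https
    kubernetes_sd_configs:
      - role: endpoints
    tls_config:
      # Cluster CA bundle, mounted from the service-account Secret volume.
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    # Service-account token, mounted from the same Secret volume.
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```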
Didn't refresh – Alex actually asked the same already.
fabxc added this to the v1.5 milestone on Jan 17, 2017
I was messing around with config scenarios trying to get authz working in Prometheus. I've converted to the k8s-supplied files and have the following error I've been trying to work out.
deployment.yaml
configmap.yaml
@alexsomesan Do I need to create a service account specifically for Prometheus / kube-state-metrics / alertmanager?
You can start off with the default one, which is always there on every cluster that is version 1.4.x or newer. From the looks of your targets' status I'd say some of your API instances are configured inconsistently. I suspect that is the reason why you only see one of them 'UP': only one of them accepts your credentials. Check the logs of the kube-apiserver process on the failed nodes for authentication errors and start from there.
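A couple of commands that can help with that kind of check; the apiserver unit or pod name depends entirely on how the masters were set up, so these are only a sketch:

```sh
# Confirm the default service account exists in the namespace Prometheus runs in.
kubectl get serviceaccount default -o yaml

# Look for authentication errors on the masters that show as DOWN,
# e.g. when the apiserver runs under systemd:
journalctl -u kube-apiserver | grep -i auth
# ...or when it runs as a pod in kube-system (pod name is a placeholder):
kubectl logs -n kube-system <kube-apiserver-pod> | grep -i auth
```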
I've removed the tls/token fields from the
The reason for the x509 errors, I believe, is that the CAs are self-signed; is there a way around that? The systemd configuration, auth.jsonl, and token.csv are all the same on the three k8s-master nodes. Still getting the auth issues; the one 'UP' node is the elected leader. Also of interest, the TLS certs for the k8s masters are being accepted. These are from the same self-hosted CA. All machines have a unique TLS certificate. Thoughts?
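On the self-signed CA question: Prometheus's tls_config can be pointed at the cluster's own CA bundle instead of the system roots, which is usually the way around x509 errors for self-signed cluster certs. A minimal sketch, assuming the in-cluster CA path:

```yaml
tls_config:
  # Trust the cluster's own (self-signed) CA instead of the system roots.
  ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  # For debugging only: disables certificate verification entirely.
  # insecure_skip_verify: true
```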
@alexsomesan The x509 woes continue: adding or changing anything in the auth sections seems to cause Prometheus to be unable to talk to the API server, yet it is still not authenticating correctly.
I have to walk back another statement... removing the tls_config and bearer_token_file from the kubernetes_sd_config causes Prometheus to not be able to list anything. Putting the config back in resolves the listing issue. @alexsomesan It seems the
@alexsomesan If I have a unique TLS cert generated for each machine, does the
ok, walk this back once again: both versions of the config map work. The sequence of create/apply/delete for the ConfigMap, the Deployment, and the Prometheus pod determines success or failure.
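For what it's worth, one ordering that avoids the pod ever starting against a missing or stale config (file names as used earlier in this thread):

```sh
# Tear down the workload first, then recreate the config before the Deployment,
# so new pods only ever start against the intended ConfigMap.
kubectl delete -f deployment.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
```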
Is there any way to output the actual config map Prometheus is seeing?
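Two ways to check; the ConfigMap name, pod name, and mount path below are placeholders, and the Status page of the Prometheus web UI should also show the loaded configuration:

```sh
# What the API server has stored:
kubectl get configmap <prometheus-config> -o yaml

# What is actually mounted into the running pod:
kubectl exec <prometheus-pod> -- cat /etc/prometheus/prometheus.yml
```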
It seems we may also be suffering from stale data in k8s. After deleting and re-creating the ConfigMap, we kill the Prom pod to pick up the new config. The new pod seems to pick up the old config; after deleting it a second time, it goes back to working with a known "working" config (despite our other authz issues).
It seems that the
oh, and the two
@alexsomesan, I got everything working!!
post-mortem:
solution:
Most of the other auth setups worked; our final working configMap:
notes:
fabxc closed this on Jul 3, 2017
lock bot commented Mar 23, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

verdverm commented Jan 10, 2017
What did you do?
Tried to use basic-auth with Prometheus
What did you expect to see?
Prometheus up and running
What did you see instead? Under which circumstances?
An error from the k8s client code about specifying both bearer and basic auths
I believe the issue is a logical error in the k8s client config creation:
prometheus/discovery/kubernetes/kubernetes.go, line 79 in d19d1bc
The bearer token can be set in the else clause (without any configuration in the yaml), and then basic-auth will be set, but the bearer token is never unset. This Config struct, with two auth methods filled in, is then passed to the k8s client code, which returns an error.
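To illustrate the pattern being described, here is a simplified sketch; it is not the actual Prometheus or client-go source, and the struct, field, and helper names are stand-ins:

```go
package kubesd

// clientConfig is a simplified stand-in for the Kubernetes client's Config;
// only the authentication-related fields matter here.
type clientConfig struct {
	BearerToken string
	Username    string
	Password    string
}

// sdConfig is a stand-in for the relevant kubernetes_sd_config fields.
type sdConfig struct {
	BearerTokenFile string
	BasicAuthUser   string
	BasicAuthPass   string
}

// buggyNew mirrors the flawed logic described above: the in-cluster bearer
// token is picked up in the else branch even when basic auth is configured,
// so the returned config carries two auth methods and the client rejects it.
func buggyNew(c sdConfig, inClusterToken string) clientConfig {
	var cfg clientConfig
	if c.BearerTokenFile != "" {
		cfg.BearerToken = readTokenFile(c.BearerTokenFile)
	} else {
		cfg.BearerToken = inClusterToken // set even when basic auth follows
	}
	if c.BasicAuthUser != "" {
		cfg.Username = c.BasicAuthUser
		cfg.Password = c.BasicAuthPass // the bearer token above is never unset
	}
	return cfg
}

// fixedNew falls back to the in-cluster token only when no other
// authentication method was configured, so at most one method is set.
func fixedNew(c sdConfig, inClusterToken string) clientConfig {
	var cfg clientConfig
	switch {
	case c.BearerTokenFile != "":
		cfg.BearerToken = readTokenFile(c.BearerTokenFile)
	case c.BasicAuthUser != "":
		cfg.Username = c.BasicAuthUser
		cfg.Password = c.BasicAuthPass
	default:
		cfg.BearerToken = inClusterToken
	}
	return cfg
}

// readTokenFile is a placeholder for reading the token from disk.
func readTokenFile(path string) string { return "token-from-" + path }
```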
Environment
System information:
quay.io docker image running in k8s 1.5.1
Prometheus version:
Starting prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)
Alertmanager version:
insert output of alertmanager -version here (if relevant to the issue)
Prometheus configuration file:
lost these, but... the error was coming from the k8s client library when both basic and bearer auth are specified