cannot use serviceaccount with token auth against rancher #14997
Comments
Okay, so the solution is to do the following steps (a sketch of the commands follows this list):

1. Get the service account token.
2. Get the service account cert.
3. Configure the cluster for the kube config. Note: the IP address 206.189.64.94:6443 is the actual Kubernetes API endpoint. In https://massimo.do.rancher.space/v3/clusters/c-gpn7w (replace the hostname and cluster id) there should be an entry "apiEndpoint": "https://206.189.64.94:6443"; you will need to use that endpoint when using a service account to talk to the Kubernetes API.
4. Set credentials for the service account.
5. Set the service account context for the kube config.
6. Switch context to test.

Results:
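A minimal sketch of these steps, assuming a service account named `test` in the `default` namespace and the endpoint quoted above (names, namespace, and file paths are illustrative, not the exact commands from the original comment):

```sh
# Look up the secret that backs the service account (token-bearing SA secrets assumed).
SECRET=$(kubectl get serviceaccount test -n default -o jsonpath='{.secrets[0].name}')

# 1. Get the service account token.
TOKEN=$(kubectl get secret "$SECRET" -n default -o jsonpath='{.data.token}' | base64 -d)

# 2. Get the service account cert.
kubectl get secret "$SECRET" -n default -o jsonpath='{.data.ca\.crt}' | base64 -d > sa-ca.crt

# 3. Configure the cluster entry, pointing at the real apiEndpoint, not the Rancher proxy URL.
kubectl config set-cluster direct --server=https://206.189.64.94:6443 \
  --certificate-authority=sa-ca.crt --embed-certs=true

# 4. Set credentials for the service account.
kubectl config set-credentials test --token="$TOKEN"

# 5. Set the service account context.
kubectl config set-context test --cluster=direct --user=test

# 6. Switch context to test.
kubectl config use-context test
kubectl get pods   # should now hit the API server directly
```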
This is more a workaround than a solution. Imagine a scenario where your kube-apiserver is not accessible from where you need to use kubectl (but Rancher is).
I just ran into this as well. Currently we restrict API server access to the Rancher server only, to make sure it is using Rancher auth, etc. We have a few integrations that use JWT and service accounts, but I don't have an easy way to give them access without opening up the API servers, creating a load balancer, etc. It would be awesome if the Rancher server could have a JWT auth pass-through, or another auth method that maps to a cluster's service account using JWT.
Hmm, so is there any way to access the Kubernetes API directly? Or does this mean that all access requires Rancher authentication and indeed only the Rancher API URLs can be used? My motivation for asking is that I have a container that tries to talk to the Kubernetes API directly using the default serviceAccount token (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod). This of course fails when I attempt to launch it under Rancher due to credentials.
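For reference, the in-pod access pattern from the linked Kubernetes docs looks roughly like this (standard in-cluster mount paths; nothing Rancher-specific is assumed):

```sh
# Run from inside a pod: use the mounted default service account credentials.
APISERVER=https://kubernetes.default.svc
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
NAMESPACE=$(cat "$SA_DIR/namespace")

# Talk to the API server directly; this never touches the Rancher proxy.
curl --cacert "$SA_DIR/ca.crt" \
     --header "Authorization: Bearer $TOKEN" \
     "$APISERVER/api/v1/namespaces/$NAMESPACE/pods"
```

If this returns 403 rather than 401, the token itself is being accepted and the problem is usually missing RBAC on the service account.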
Sure, you can talk to the Kubernetes API directly, by talking to the cluster at the endpoint it exposes. The ask here is for you to be able to make a request still to the Rancher server, but containing a token Rancher has no knowledge of, and then have it proxy that request (including the token) to the target cluster so that it works (if that is a token the cluster can verify). But the flip side is that this removes a layer of protection and exposes all clusters directly to arbitrary requests from anyone who can reach the server container, even if the cluster itself is not directly reachable from the outside world at all, instead of the current behavior of only proxying through requests which have already been authorized by a Rancher token. This does not seem like a very good tradeoff.
Thanks @vincent99 for your response. Let me regurgitate my understanding and see if it matches.

- The common API usage scenario: a user uses the API that Rancher exposes, which can be found by getting the kubectl file from the web UI for an individual cluster and user. This works well.
- Use the Kubernetes API directly (ignoring Rancher): a user extracts the API information of the underlying clusters that Rancher has configured. One does this by accessing the Rancher container and extracting the cluster secrets, which reveals the API server and the API token (apparently there is a feature for this now, #13698). This also works well.
- Access the Kubernetes API from inside a pod: a process running inside a pod uses the service token and endpoints that are injected into every pod to access the cluster, a la https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod. This doesn't work.

Conclusions:
What is it that you're actually trying to accomplish? There seems to be more than one concern getting confused together here.

If you're trying to talk to the native k8s (NOT Rancher) API for a cluster, from a pod running inside that same cluster, then you just create a service account (or use the default one) and talk to the API with it. Rancher/the server container is not really involved in this at all; it's a standard k8s feature.

If you're trying to use a native k8s serviceaccount token to authenticate through the Rancher server proxy and talk to either the Rancher or k8s APIs, that doesn't work, because we only proxy authenticated connections and have no knowledge of that token (and this is unlikely to change, as mentioned above).

In related news, 2.2 has (in alpha)/will have an option to copy Rancher API keys down to the cluster(s) they apply to, and the ability for the cluster to authenticate them itself directly, instead of/in addition to going through the server proxy. This allows you to use the kubeconfig you get for a cluster, or a Rancher API key, to talk directly to a cluster if desired. If you expose the cluster to the internet, create a load balancer, and give us an FQDN for it, then this becomes the default way of talking to that cluster. Otherwise the default stays going through Rancher, but this still provides a backup mechanism so that you can reach the cluster even if the server is on fire.
Yes, you are right. This is my bad. I thought Rancher was interfering with the authentication of the service account in my underlying Kubernetes installation. I just didn't have the appropriate permissions set on my service account. Sorry for the roundabout 😬 I appreciate the responses. For posterity, this is what I did (I know this is probably a bad idea):
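The original snippet is not preserved here. As a purely hypothetical illustration, an over-broad fix of this kind often looks like binding cluster-admin to the default service account:

```sh
# Hypothetical example only: grants full cluster-admin to the default SA, which is rarely a good idea.
kubectl create clusterrolebinding default-sa-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:default
```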
Looks resolved. Going to close this issue.
@cjellick this does not look resolved to me at all. It's still impossible (unless I'm missing something) to use ServiceAccounts via the Rancher proxy. Rancher could validate the token via the underlying cluster and forward or block requests based on that.
Service account credentials are not stored in the Rancher server, are not going to be, and the server is not going to pass unauthenticated requests to a target cluster. If you want to use native service accounts then you need to talk directly to the cluster, which, as mentioned, 2.2 now has a mechanism to help with.
So we use the k8s auth method for accessing secrets in Hashicorp Vault. This uses the JWT token from the requesting pod's configured service account, which is authenticated using the token review service in the API server. In this setup it is necessary to send requests directly to the API server (or an external LB sitting atop it if you have an HA setup, or just have it configured that way to make DNS easier). As @vincent99 suggests, this could make for a less secure configuration; however, OTOH, the JWT token is scoped to one or more namespaces, and the associated Vault role and policies can mean that the level of access available is very fine grained indeed (i.e. a single secret with read-only capability). In general I prefer not to use Rancher's impersonation, since this creates a dependency on the availability of Rancher itself, which could impact our ability to manage deployments. Of course that can be mitigated by running HA and so forth, so I'm not advocating that anyone else should do the same; that's just our choice based on a number of factors.
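As a rough sketch of how tightly that access can be scoped (the role, policy, service account, and secret path names here are assumptions, not taken from the comment):

```sh
# Enable the Kubernetes auth method in Vault.
vault auth enable kubernetes

# A read-only policy for a single secret path.
vault policy write app-readonly - <<'EOF'
path "secret/data/app/config" {
  capabilities = ["read"]
}
EOF

# Bind the role to one service account in one namespace, with a short TTL.
vault write auth/kubernetes/role/app \
  bound_service_account_names=app-sa \
  bound_service_account_namespaces=app-namespace \
  policies=app-readonly \
  ttl=15m
```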
For anyone interested, using the endpoint described by vincent99, documented here, worked for me: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint. Grab the Secret name for your SA, then grab the cert from the SA's secret (see the sketch below).
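A short sketch of those two steps (the service account name `vault-auth` is an assumption):

```sh
# Grab the Secret name for your SA.
SECRET=$(kubectl get serviceaccount vault-auth -o jsonpath='{.secrets[0].name}')

# Grab the cert from the SA's secret.
kubectl get secret "$SECRET" -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```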
I am at a loss on how to use Rancher's authorized cluster endpoint. I am working on setting up a cluster with Ceph's CSI RBD plugin and Vault to generate/store keys for encrypting the volumes. I created a service account with access to the TokenReview API via kubectl and then set up the corresponding role in Vault. But when I try to follow @atsai1220's example above to configure Vault, I get stuck on a "permission denied" error when I try to hit Vault's /auth/kubernetes/login endpoint.
Problem was fully solved. We have the same case as @atsai1220 described, but with EKS; Rancher's Authorized Cluster Endpoint doesn't work for us, so we had to specify the cluster endpoint from AWS EKS.
You need to provide your control node's IP at port 6443 to Vault as the Kubernetes host, and retrieve your cluster's CA certificate.
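Something along these lines, assuming a control node at 10.0.0.10 and the CA cert and reviewer JWT already extracted (all values are placeholders):

```sh
# Point Vault's Kubernetes auth at the API server itself, not the Rancher proxy.
vault write auth/kubernetes/config \
  kubernetes_host="https://10.0.0.10:6443" \
  kubernetes_ca_cert=@ca.crt \
  token_reviewer_jwt="$REVIEWER_JWT"
```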
If you want to use a layer-4 LB in front of your control nodes then you need to update your cluster config in Rancher to add your LB hostname to your certificate's subject alternative name list. Feel free to read more here: #26986

```yaml
rancher_kubernetes_engine_config:
  authentication:
    sans:
      - my-virtual-cluster-name.my.domain
```
tl;dr - Rancher-provisioned k8s clusters DO NOT WORK with the k8s cluster + Vault integration.

There is a scenario where this problem directly affects integration with the Hashicorp Vault "Kubernetes" auth engine, which, as I understand it, requires a JWT that was created at some previous point from an existing service account in the k8s cluster. When an application "X" running in a pod needs to log in to Vault to get secrets, a Vault login request goes to the auth/kubernetes auth method on the Vault server. The Vault k8s auth engine then uses a JWT that was previously created from an existing k8s service account, which the engine is authorized to use, to authenticate to the k8s cluster API and perform a token review. This is a real-world example of where JWT auth is needed and is prescribed by the Hashicorp Vault documentation.

Since API auth seems to only be possible via Rancher-generated API tokens or other Rancher-supported auth methods, the JWT 'Bearer Token' provided by the Vault callback to the k8s cluster fails with "401 unauthorized" errors. JWT service account tokens are not unknown to the underlying k8s cluster, so from a trust perspective it's not an issue: the k8s cluster's operators are explicitly trusting the Vault cluster, on purpose, to auth with that JWT. And as @goffinf has mentioned, this is mitigated by limiting it to specific namespaces and service accounts. This should be an operator decision, not a Rancher one, IMO.
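For concreteness, the login step that fails when the token review has to pass through the Rancher proxy looks roughly like this (the role name and Vault address are assumptions):

```sh
# From the application pod: exchange the pod's SA JWT for a Vault token.
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --request POST \
     --data "{\"role\": \"app\", \"jwt\": \"$JWT\"}" \
     https://vault.example.com/v1/auth/kubernetes/login
# Vault then calls the cluster's TokenReview API with its reviewer JWT to validate this token;
# if that call is made against the Rancher proxy, it is rejected with 401.
```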
What's discussed here is (confusion with): a) access to clusters that aren't (necessarily) otherwise exposed, via the Rancher server, instead of talking to them directly; and b) access to clusters by talking to them directly, using credentials managed by Rancher (instead of service accounts managed by k8s). But not c) a normal k8s service account talking directly to the cluster's endpoint, or to the k8s service from within the cluster. You seem to be talking about "c", which we do nothing to interfere with, AFAIK. Please make a new issue with whatever detail you can for your specific problem. Closed issues are not regularly monitored.
4 years later and still no way to use rancher with vault via kubernetes auth |
@cod-r I left my previous employer a couple of years ago, and it was there that we had Enterprise support from Rancher. I recall at that time discussing greater integration between Hashicorp Vault and Rancher with our Rancher Technical Account Manager, and we even had a call with the Engineering team at Rancher. It didn't get much further at that time, and presumably it hasn't since. I somewhat agree with @vincent99 that the creation of the k8s vault-auth service account and the association of that account's JWT to the Vault k8s-auth backend is a bit beyond the scope of Rancher (though at the time it was common enough amongst Enterprise customers to make use of Vault, hence the conversation with them). I still run k8s clusters and continue to use the k8s-auth backend in Vault. I set up the service account and configure Vault as part of cluster post-provisioning, which is all automated in the same CI pipeline that creates and configures many other aspects. It's relatively straightforward then to run the Vault agent as a sidecar (in caching mode) so that the (short-lived) token that is issued is regularly renewed, and pods retain access to Vault within the scope of the policies associated with whatever service account they run under.
Rancher versions:
rancher/rancher:2.0.6
Infrastructure Stack versions:
kubernetes (if applicable): v1.10.3-rancher2-1
Docker version: (docker version, docker info preferred)
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
Bare-metal
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)
single node rancher
Environment Template: (Cattle/Kubernetes/Swarm/Mesos)
Kubernetes
Steps to Reproduce:
Results:
Instead, when I use the kube-apiserver directly, it works (the cluster is defined in my .kube/config).
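The reporter's exact commands are not preserved; as a hypothetical illustration of the difference (placeholder URLs and token, reusing the cluster id and endpoint mentioned earlier in the thread):

```sh
# Through the Rancher proxy: the SA token is unknown to Rancher, so this fails with 401.
kubectl --server=https://rancher.example.com/k8s/clusters/c-gpn7w \
        --token="$SA_TOKEN" get pods

# Directly against the cluster's API endpoint: the cluster can verify its own SA token.
kubectl --server=https://206.189.64.94:6443 \
        --certificate-authority=sa-ca.crt \
        --token="$SA_TOKEN" get pods
```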