cannot use serviceaccount with token auth against rancher #14997

Closed

ghost opened this issue Aug 8, 2018 · 20 comments

Comments

ghost commented Aug 8, 2018

Rancher versions:
rancher/rancher:2.0.6

Infrastructure Stack versions:
kubernetes (if applicable): v1.10.3-rancher2-1

Docker version: (docker version, docker info preferred)

$ docker info
Containers: 64
 Running: 32
 Paused: 0
 Stopped: 32
Images: 34
Server Version: 17.03.2-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.16.7-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 94.4 GiB
Name: dockerblade-slot4-oben.ub.intern.example.com
ID: KXFV:3XKT:RY4N:SGZE:ZCNB:57PH:BLWT:H27S:K6OE:OVKA:UJLB:O3JE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Http Proxy: http://proxy.example.com:3128
Https Proxy: http://proxy.example.com:3128
No Proxy: localhost,127.0.0.1,.example.com
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Operating system and kernel: (cat /etc/os-release, uname -r preferred)

$ uname -r
4.16.7-1.el7.elrepo.x86_64

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
Bare-metal

Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)
single node rancher

Environment Template: (Cattle/Kubernetes/Swarm/Mesos)
Kubernetes

Steps to Reproduce:

  • create a serviceaccount and a role/rolebinding:
$ kubectl create -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testaccount
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: testrole
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: testrolebinding
subjects:
- kind: ServiceAccount
  name: testaccount
roleRef:
  kind: Role
  name: testrole
  apiGroup: rbac.authorization.k8s.io
EOF
  • get the token of the account
$ kubectl get secret $(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name}) -o jsonpath={.data.token} | base64 -d
  • use the token to perform cluster operations against Rancher (a sketch of this step follows below)
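
For reference, this is roughly how the token was used (a hedged sketch; the report doesn't show the exact commands, and the cluster/context names are placeholders for the Rancher-proxied cluster entry):

# assumes $TOKEN holds the decoded service account token from the previous step
$ kubectl config set-credentials testaccount --token="$TOKEN"
$ kubectl config set-context testaccount@rancher --cluster=<rancher-proxied-cluster> --user=testaccount
$ kubectl config use-context testaccount@rancher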

Results:

$ kubectl auth can-i get pods
error: You must be logged in to the server (the server has asked for the client to provide credentials (post selfsubjectaccessreviews.authorization.k8s.io))

Instead, when I use the kube-apiserver directly, it works:

$ kubectl --cluster k8s auth can-i get pods
yes

(the cluster is defined in my .kube/config)

ghost commented Aug 8, 2018

Okay so the solution is to do the following steps:

Get the service account token:

root@massimo-server:~# TOKEN=$(kubectl get secret \
  $(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name}) \
    -o jsonpath={.data.token} | base64 -d)

Get the service account cert:

root@massimo-server:~# kubectl get secret \
  $(kubectl get serviceaccount testaccount -o jsonpath={.secrets[0].name}) \
    -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt

Configure cluster for kube config:

root@massimo-server:~# kubectl config set-cluster dev-cluster \
  --embed-certs=true \
  --server=https://206.189.64.94:6443/ \
  --certificate-authority=./ca.crt

Note: the IP address 206.189.64.94:6443 is the actual kubernetes API endpoint.
You can fetch this from Rancher API,

https://massimo.do.rancher.space/v3/clusters/c-gpn7w

Replace the hostname and cluster id; the response should contain an entry called

"apiEndpoint": "https://206.189.64.94:6443",

You will need to use that endpoint when using a service account to talk to the kubernetes
cluster.
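
A hedged sketch of pulling that field from the command line (assumes jq is installed and $RANCHER_TOKEN holds a Rancher API bearer token; the hostname and cluster id are the placeholders from above):

$ curl -s -H "Authorization: Bearer $RANCHER_TOKEN" \
    https://massimo.do.rancher.space/v3/clusters/c-gpn7w | jq -r '.apiEndpoint'
https://206.189.64.94:6443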

Set credentials for the service account:

root@massimo-server:~# kubectl config set-credentials testaccount --token=$TOKEN

Set the service account context for kube config:

root@massimo-server:~# kubectl config set-context testaccount-sa \
   --cluster=dev-cluster \
   --user=testaccount \
   --namespace=default

Switch context to test:

root@massimo-server:~# kubectl config use-context testaccount-sa

Results:

root@massimo-server:~# kubectl auth can-i get pods
yes

root@massimo-server:~# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
ping1-57dc7755f4-7nhrb   1/1       Running   0          1d
ping1-57dc7755f4-gh8r5   1/1       Running   0          1d

@ghost ghost added kind/question, area/kubernetes, version/2.0, and area/rbac labels Aug 8, 2018

ghost commented Aug 9, 2018

This is more a workaround than a solution. Imagine a scenario where your kube-apiserver is not accessible from where you need to use kubectl (but rancher is).

mitchellmaler commented Sep 26, 2018

I just ran into this as well. Currently we restrict API server access to the Rancher server only, to make sure everything goes through Rancher auth, etc. We have a few integrations that use JWTs and service accounts, but I don't have an easy way to give them access without opening up the API servers, creating a load balancer, etc. It would be awesome if the Rancher server could have a JWT auth pass-through, or another auth method that maps to a cluster's service account using the JWT.

@loganhz loganhz added status/need-follow-up and removed kind/question labels Oct 23, 2018
@bclouser

Hmm, so is there any way to access the Kubernetes API directly? Or does this mean that all access requires Rancher authentication and must indeed go through the Rancher API URLs?

My motivation for asking is that I have a container that tries to talk to the Kubernetes API directly using the default serviceAccount token. (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod) This of course fails with a credentials error when I attempt to launch it under Rancher.
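
For reference, the in-pod pattern from that page boils down to using the token and CA bundle that Kubernetes mounts into every pod (a minimal sketch of the documented flow; the paths and the kubernetes.default.svc name are the standard ones, nothing Rancher-specific):

# run inside the pod
$ SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
$ TOKEN=$(cat $SA_DIR/token)
$ curl --cacert $SA_DIR/ca.crt -H "Authorization: Bearer $TOKEN" \
    https://kubernetes.default.svc/api/v1/namespaces/default/pods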

@vincent99

Sure, you can talk to the Kubernetes API directly by talking to the cluster at the endpoint it exposes. The ask here is to be able to make a request to the Rancher server containing a token Rancher has no knowledge of, and have Rancher proxy that request (including the token) to the target cluster so that it works, if it is a token the cluster can verify.

But the flip side is that this removes a layer of protection and directly exposes all clusters to arbitrary requests from anyone who can reach the server container, even if the cluster itself is not directly reachable from the outside world at all, instead of the current behavior of only proxying requests that have already been authorized by a Rancher token. This does not seem like a very good tradeoff.
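
To make the two paths concrete, here is a hedged sketch (hostnames and the cluster id are illustrative; the proxy path follows the pattern used in Rancher-generated kubeconfigs):

# through the Rancher server proxy: only tokens Rancher itself knows about are accepted
$ curl -H "Authorization: Bearer $RANCHER_API_TOKEN" \
    https://<rancher-server>/k8s/clusters/<cluster-id>/api/v1/namespaces
# direct to the cluster's own endpoint: native service account tokens work
$ curl --cacert ca.crt -H "Authorization: Bearer $SA_TOKEN" \
    https://206.189.64.94:6443/api/v1/namespaces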

@bclouser

Thanks @vincent99 for your response. Let me re-regurgitate my understanding and see if it matches.

The common API usage scenario: A user uses the API that Rancher exposes, which can be found by downloading the kubeconfig file from the web UI for an individual cluster and user. This works well.

Use the kubernetes API directly (ignoring Rancher): A user extracts the API information of the underlying clusters that rancher has configured. One does this by accessing the Rancher container and extracting the cluster secrets which reveals the api server and the api token (apparently there is a feature for this now #13698). This also works well.

Access the kubernetes API from inside a pod: Running process inside a pod uses the service token and endpoints that are injected into every pod to access the cluster a la https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod. This doesn't work.

Conclusions:
Accessing the API from a pod doesn't work under Rancher, and this is because Rancher doesn't understand the service account token used during the authentication? Is that right?
And lastly, there is no development effort to implement this because of the security concern regarding anyone being able to manipulate the server if they have access to one of the pods?

vincent99 commented Feb 14, 2019

What is it that you're actually trying to accomplish? There seems to be more than one concern getting confused together here.

If you're trying to talk to the native k8s (NOT Rancher) API for a cluster, from a pod running inside of that same cluster, then you just create a service account (or use the default one) and then talk to the API using it. Rancher/the server container is not really involved at all in this, it's a standard k8s feature.

If you're trying to use a native k8s serviceaccount token to authenticate through the Rancher server proxy and talk to either the Rancher or k8s APIs, that doesn't work because we only proxy authenticated connections and have no knowledge of that token (and is unlikely to change, as mentioned above).

In related news, 2.2 has (in alpha)/will have an option to copy Rancher API keys down to the cluster(s) they apply to, and the ability for the cluster to authenticate them itself directly instead of/in addition to going through the server proxy. This allows you to use the kubeconfig you get for a cluster, or a Rancher API key, to talk directly to the cluster if desired. If you expose the cluster to the internet, create a load balancer, and give us an FQDN for it, then this becomes the default way of talking to that cluster. Otherwise the default remains going through Rancher, but this still provides a backup mechanism so that you can reach the cluster even if the server is on fire.

@bclouser

Yes, you are right. This is my bad. I thought Rancher was interfering with the authentication of the service account in my underlying kubernetes installation. I just didn't have the appropriate permissions set with my service-account. Sorry for the roundabout 😬 I appreciate the responses.

For posterity, this is what I did (I know this is probably a bad idea):

kubectl create rolebinding serviceaccounts-admin --clusterrole=admin --serviceaccount=default:default --namespace=default

@cjellick

Looks resolved. Going to close this issue.

pfyod commented Jul 12, 2019

@cjellick does not look resolved to me at all. It's still impossible (unless I'm missing something) to use ServiceAccounts via rancher proxy. Rancher could validate the token via the underlying cluster and forward / block requests based on that.

@vincent99

Service account credentials are not stored in the rancher server, are not going to be, and the server is not going to pass unauthenticated requests to a target cluster.

If you want to use native service accounts then you need to talk directly to the cluster, which as we mentioned 2.2 now has a mechanism to help with.

goffinf commented Sep 24, 2019

So we use the K8s auth method for accessing secrets in Hashicorp Vault. This uses the JWT token from the requesting pod's configured service account, which is authenticated via the token review service in the API server. In this setup it is necessary to send requests directly to the API server (or to an external LB sitting atop it if you have an HA setup, or just have it configured that way to make DNS easier).

As @vincent99 suggests, this could make for a less secure configuration. OTOH, the JWT token is scoped to one or more namespaces, and the associated Vault role and policies can mean that the level of access available is very fine-grained indeed (i.e. a single secret with read-only capability).

In general I prefer not to use Rancher's impersonation, since this creates a dependency on the availability of Rancher itself, which could impact our ability to manage deployments. Of course that can be mitigated by running HA and so forth, so I'm not advocating that anyone else should do the same; that's just our choice based on a number of factors.
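
For anyone wanting to reproduce that setup, here is a hedged sketch with the Vault CLI (the mount path, role name, policy, and namespace are illustrative; kubernetes_host has to be the real API server or the LB in front of it, not the Rancher proxy, for the reasons discussed above):

# one-time configuration of the auth method
$ vault auth enable kubernetes
$ vault write auth/kubernetes/config \
    kubernetes_host=https://<apiserver-or-lb>:6443 \
    kubernetes_ca_cert=@ca.crt \
    token_reviewer_jwt="$REVIEWER_JWT"
# a role scoping access to one service account in one namespace
$ vault write auth/kubernetes/role/my-app \
    bound_service_account_names=my-app \
    bound_service_account_namespaces=default \
    policies=my-app-read ttl=1h
# from inside a pod, log in with the pod's own service account JWT
$ vault write auth/kubernetes/login role=my-app \
    jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"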

atsai1220 commented Feb 28, 2020

For anyone interested, using the authorized cluster endpoint described by vincent99 worked for me: https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint

Grab the Secret name for your SA (kc here is an alias for kubectl)
kc get sa -n <namespace> <SA name> -o yaml

Grab the cert from the SA's secret
kc get secret <Secret Name> -n <namespace> -o jsonpath="{.data['ca\.crt']}" | base64 -D   # -D is the BSD/macOS flag; GNU base64 uses -d

vault write auth/<your auth>/config \
  kubernetes_host=https://<your apiEndpoint with port> \
  kubernetes_ca_cert=@<absolute path to your k8s cert>

@jeremy-kaltenbach

I am at a loss on how to use Rancher's authorized cluster endpoint. I am working on setting up a cluster with Ceph's CSI RBD plugin and Vault to generate/store keys for encrypting the volumes. I created a service account with access to the TokenReview API via kubectl and then set up the corresponding role in Vault. But when I try to follow @atsai1220 's example above to configure Vault, I am stuck getting a "permission denied" error when I try to hit Vault's /auth/kubernetes/login endpoint.
Is there a specific certificate and JWT that I need to use that's different than the ones from my service account? Or are there some other steps that I need to do for my service account?

@lkhomenk

Problem was fully solved. We have the same case as @atsai1220 described, but with EKS; Rancher's Authorized Cluster Endpoint doesn't work for us, so we had to specify the cluster endpoint from AWS EKS instead.

atsai1220 commented Mar 23, 2021

> I am at a loss on how to use Rancher's authorized cluster endpoint. I am working on setting up a cluster with Ceph's CSI RBD plugin and Vault to generate/store keys for encrypting the volumes. I created a service account with access to the TokenReview API via kubectl and then set up the corresponding role in Vault. But when I try to follow @atsai1220 's example above to configure Vault, I am stuck getting a "permission denied" error when I try to hit Vault's /auth/kubernetes/login endpoint.
> Is there a specific certificate and JWT that I need to use that's different than the ones from my service account? Or are there some other steps that I need to do for my service account?

You need to provide your control node's IP at port 6443 to Vault as kubernetes_host in order for Vault to talk directly to the Kubernetes API.

Retrieve your cluster's kube-ca.pem from this command

kubectl run -it --rm --restart=Never busybox --image=busybox:1.28 -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

If you want to use a layer-4 LB in front of your control nodes then you need to update your cluster config in Rancher to add your LB hostname to your certificate's subject alternative name list. Feel free to read more here: #26986

rancher_kubernetes_engine_config:
  authentication:
    sans:
      - my-virtual-cluster-name.my.domain
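
A quick way to confirm the extra SAN actually made it onto the serving certificate after the cluster update (a hedged sketch; assumes openssl is available and the LB hostname resolves):

$ echo | openssl s_client -connect my-virtual-cluster-name.my.domain:6443 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'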

glermaidt commented Oct 14, 2021

@vincent99

tl;dr - Rancher provisioned k8s clusters DO NOT WORK with k8s cluster + Vault Integration

There is a scenario where this problem directly affects integration with the Hashicorp Vault "Kubernetes" auth engine, which, as I understand it, requires a JWT that was created at some previous point from an existing service account in the k8s cluster.

When an application "X" running in a pod needs to log in to Vault to get secrets, a Vault login request goes to the auth/kubernetes auth method on the Vault server. The Vault k8s auth engine then uses a JWT that was previously created from an existing k8s service account (one the engine is authorized to use) to authenticate to the k8s cluster API and perform a token review (...apis/authentication.k8s.io/v1/tokenreviews) of another service account's JWT token (the one for application "X") in the k8s cluster.
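
For illustration, the review call the engine makes looks roughly like this (a hedged sketch; $REVIEWER_JWT stands for the engine's own service account token and $APP_JWT for application "X"'s token, both illustrative names):

$ curl -s --cacert ca.crt -X POST \
    -H "Authorization: Bearer $REVIEWER_JWT" \
    -H "Content-Type: application/json" \
    -d '{"apiVersion":"authentication.k8s.io/v1","kind":"TokenReview","spec":{"token":"'"$APP_JWT"'"}}' \
    https://<apiEndpoint>:6443/apis/authentication.k8s.io/v1/tokenreviews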

This is a real-world example of where JWT auth is needed, and it is prescribed by the Hashicorp Vault documentation. Since API auth seems to only be possible via Rancher-generated API tokens or other Rancher-supported auth methods, the JWT Bearer token provided by the Vault callback to the k8s cluster fails with 401 unauthorized errors.

JWT service account tokens are not unknown to the underlying k8s cluster, so from a trust perspective it's not an issue. The k8s cluster's operators are explicitly and deliberately trusting the Vault cluster to auth with that JWT. And as @goffinf has mentioned, this is mitigated by limiting it to specific namespaces and service accounts. This should be an operator decision, not a Rancher one, IMO.

@vincent99

What's being discussed (and confused together) here is:

a) access to clusters that aren't (necessarily) otherwise exposed, via the Rancher server, instead of talking to them directly.

And b) access to clusters by talking to them directly, using credentials managed by Rancher (instead of service accounts managed by k8s)

But not c) a normal k8s service account talking directly to the cluster's endpoint, or to the k8s service from within the cluster.

You seem to be talking about "c", which we do nothing to interfere with AFAIK. Please make a new issue with whatever detail you can for your specific problem. Closed issues are not regularly monitored.

cod-r commented Nov 18, 2022

4 years later and still no way to use rancher with vault via kubernetes auth

goffinf commented Nov 18, 2022

@cod-r I left my previous employer a couple of years ago, and it was there that we had Enterprise support from Rancher. I recall at that time discussing greater integration between Hashicorp Vault and Rancher with our Rancher Technical Account Manager, and we even had a call with the Engineering team at Rancher. It didn't get much further at that time, and presumably it hasn't since. I somewhat agree with @vincent99 that the creation of the k8s vault-auth service account and the association of that account's JWT with the Vault k8s-auth backend is a bit beyond the scope of Rancher (though at the time it was common enough amongst Enterprise customers to make use of Vault, hence the conversation with them).

I still run k8s clusters and continue to use the k8s-auth backend in Vault. I set up the service account and configure Vault as part of cluster post-provisioning, which is all automated in the same CI pipeline that creates and configures many other aspects. It's relatively straightforward then to run the Vault agent as a sidecar (in caching mode) so that the (short-lived) token that is issued is regularly renewed, and pods retain access to Vault within the scope of the policies associated with whatever service account they run under.
