# Use ServiceAccountToken volumes #3260
|
As for provider support, this is supported wherever we can specify the API server flags.

**kind**: Supported, although not enabled by default. One needs to pass a config file such as:

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "service-account-issuer": "kubernetes.default.svc"
      "service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
```

**Minikube**: Supported, although not enabled by default. Minikube needs to be started with those flags enabled:

```
minikube start \
  --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
  --extra-config=apiserver.service-account-issuer=kubernetes/serviceaccount \
  --extra-config=apiserver.service-account-api-audiences=api
```

**Microk8s**: Supported, although not enabled by default. The flags need to be set in ...:

```
--service-account-issuer=api
--service-account-signing-key-file=${SNAP_DATA}/certs/server.key
--service-account-api-audiences=api
```

**Docker Desktop**: Supported, although not enabled by default. To enable, first open a session against Docker's tty:

```
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
```

Then edit the kube-apiserver config:

```
vi /etc/kubernetes/manifests/kube-apiserver.yaml
```

and append the flags to the .... Close the session with ....

**GKE**: Supported, and enabled by default!

**AKS**: Not supported and there's no way to set it up.

**EKS**: Not supported and there's no way to set it up. Not through the console nor through eksctl.

Edit: Completed Docker Desktop section |
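For reference, a quick way to check whether a given cluster actually honors token projection is to apply a throwaway pod that requests a projected token and see whether it mounts. This is a sketch; the pod name, audience, and paths are illustrative assumptions:

```yaml
# Sketch: a throwaway pod requesting a projected ServiceAccountToken.
# If the API server lacks the service-account-issuer / signing-key flags,
# the kubelet will fail to mount this volume.
apiVersion: v1
kind: Pod
metadata:
  name: token-projection-check   # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-token
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: projected-token
    projected:
      sources:
      - serviceAccountToken:
          path: token            # file name under the mountPath
          audience: api          # illustrative audience
          expirationSeconds: 600
```

If the feature is enabled, `kubectl exec token-projection-check -- cat /var/run/secrets/tokens/token` should print a JWT bound to this pod.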
|
Whelp, this sucks. Should we add a config option just for GKE or shelve this for now? |
|
I'm moving this out of the release for now. Once this is more widely adopted it'd be great to do. |
|
AWS EKS supports this by default since early September with Kubernetes 1.14, and they encourage using projected service account tokens for IAM role assumption. As for AKS, let's see how this issue gets resolved: Azure/AKS#1288 Azure/AKS#1208 |
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions. |
|
AKS ETA is early February Azure/AKS#1208 (comment) So is support in GKE+EKS+AKS enough to have a go at this feature? |
|
@reegnz AKS has support now? Given the lack of support in minikube/microk8s, we'll need to degrade gracefully no matter what. Interested in taking up the contribution? |
|
@grampelberg I'm not that proficient in go yet, so I'll have someone else pick this up. :) |
|
Minikube won't start with those flags, at least not version 1.8.2 |
|
Thanks @irizzant 👍 |
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions. |
|
I recently came across the following solution. I'm not sure it completely addresses the problems here, but figured it's worth mentioning. Reference: https://minikube.sigs.k8s.io/docs/handbook/addons/gcp-auth/ |
|
Hello @alpeb, there's a typo in
should be
|
|
Thanks @taman9333! I've updated the comment above 👍 |
|
Thanks for the patience everyone! With this being available by default in all the major cloud providers (i.e. GKE, AKS, EKS) and also in kind, we can start building this for Linkerd. I've started reading up more on this, and I or someone else will post a separate design issue in the very near future! |
Fixes #3260

## Summary

Currently, Linkerd uses a service account token to validate a pod during the `Certify` request with identity, through which identity is established on the proxy. This works well, as Kubernetes attaches the `default` service account token of a namespace as a volume (unless the user overrides it with a specific service account). The catch is that this token is aimed at letting the application talk to the Kubernetes API, not specifically at Linkerd. This means there are [controls outside of Linkerd](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server) to manage this service token, which users might want to use, [causing problems with Linkerd](#3183) as Linkerd might expect it to be present.

To have more granular control over the token, and to not rely on a service token that can be managed externally, [Bound Service Account Tokens](https://github.com/kubernetes/enhancements/tree/master/keps/sig-auth/1205-bound-service-account-tokens) can be used to generate tokens that are specifically for Linkerd, bound to a specific pod, along with an expiry.

## Background on Bound Service Account Tokens

This feature was GA'ed in Kubernetes 1.20, and is enabled by default in most cloud provider distributions. Using this feature, Kubernetes can be asked to issue tokens specifically for Linkerd usage (through audience-bound configuration), with a specific expiry time (as the validation happens every 24 hours when establishing identity, we can follow the same), bound to a specific pod (meaning verification fails if the pod object isn't available). Because of all these bounds, and because the token can't be used for anything else, it feels like the right thing to rely on to validate a pod before issuing a certificate.
### Pod Identity Name

We still use the same service account name as the pod identity (used with metrics, etc.), as these tokens are all generated from the same base service account attached to the pod (could be `default`, or the user-overridden one). This can be verified by looking at the `user` field in the `TokenReview` response.

<details>
<summary>Sample TokenReview response</summary>

Here, the new token was created for the `vault` audience for a pod which had a serviceAccount token volume projection and was using the `mine` serviceAccount in the `default` namespace.

```json
{
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null,
    "managedFields": [
      {
        "manager": "curl",
        "operation": "Update",
        "apiVersion": "authentication.k8s.io/v1",
        "time": "2021-10-19T19:21:40Z",
        "fieldsType": "FieldsV1",
        "fieldsV1": {"f:spec":{"f:audiences":{},"f:token":{}}}
      }
    ]
  },
  "spec": {
    "token": "....",
    "audiences": [
      "vault"
    ]
  },
  "status": {
    "authenticated": true,
    "user": {
      "username": "system:serviceaccount:default:mine",
      "uid": "889a81bd-e31c-4423-b542-98ddca89bfd9",
      "groups": [
        "system:serviceaccounts",
        "system:serviceaccounts:default",
        "system:authenticated"
      ],
      "extra": {
        "authentication.kubernetes.io/pod-name": [
          "nginx"
        ],
        "authentication.kubernetes.io/pod-uid": [
          "ebf36f80-40ee-48ee-a75b-96dcc21466a6"
        ]
      }
    },
    "audiences": [
      "vault"
    ]
  }
}
```

</details>

## Changes

- Update `proxy-injector` and install scripts to include the new projected Volume and VolumeMount.
- Update the `identity` pod to validate the token with the Linkerd audience key.
- Added `identity.serviceAccountTokenProjection` to disable this feature.
- Updated the erroring logic with `automountServiceAccountToken: false` to fail only when this feature is disabled.

Signed-off-by: Tarun Pothulapati <tarunpothulapati@outlook.com>
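For completeness, a `TokenReview` like the sample response above can be requested by POSTing a small object to the `authentication.k8s.io/v1` API. This is a sketch, with the token value left as a placeholder:

```json
{
  "apiVersion": "authentication.k8s.io/v1",
  "kind": "TokenReview",
  "spec": {
    "token": "<contents of the projected token file>",
    "audiences": ["vault"]
  }
}
```

Submitting it (e.g. with `kubectl create -f tokenreview.json -o yaml`) returns the same object with its `status` populated, including the `user` and pod-binding `extra` fields shown above.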
**What problem are you trying to solve?**
Currently, service account tokens are used to validate identity for pods. These are passed as part of the CSR to identity. In situations where the token is not mounted automatically, a pod's identity cannot be verified and identity must be disabled. This token is also needlessly over-permissive and shouldn't be shipped around.

**How should the problem be solved?**
A new type of volume, ServiceAccountToken, reached beta in k8s 1.12. It is mounted on a per-container basis, can have a specific expiration, and allows restriction by audience. Instead of relying on the default service account token to be auto-mounted, injection should add the volume to the proxy's pod and restrict it to exactly the audience required.
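As a concrete sketch of that injection, the projected volume in the pod spec would look roughly like the following; the volume name, mount path, and audience string here are illustrative assumptions, not the values Linkerd ultimately uses:

```yaml
# Sketch: volume + mount that injection would add for the proxy container.
spec:
  containers:
  - name: linkerd-proxy
    volumeMounts:
    - name: linkerd-identity-token   # illustrative name
      mountPath: /var/run/secrets/tokens
      readOnly: true
  volumes:
  - name: linkerd-identity-token
    projected:
      sources:
      - serviceAccountToken:
          path: linkerd-identity-token
          audience: identity.l5d.io    # illustrative audience string
          expirationSeconds: 86400     # matches the 24h identity cadence
```

The `audience` field is what lets the identity service reject tokens that were minted for any other consumer.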
This should not be a configuration option and instead be the only way to view service account tokens moving forward.
**Concerns**
While this feature reached beta in k8s 1.12, it is unclear whether it is available in most cloud providers and local solutions (Docker Desktop, minikube). Before implementing, support for at least GKE, AKS, EKS, minikube and Docker Desktop should be validated.