
ServiceAccounts #7101

Merged: 8 commits merged into kubernetes:master from the service_account branch on May 12, 2015

Conversation

@liggitt (Member) commented Apr 21, 2015

This is a first pass at ServiceAccount objects and associated tokens.

Overview:

  • ServiceAccount API objects (and client, printer, describer, etc)
  • ServiceAccountToken secret type (and describer)
  • ServiceAccountTokensController to manage ServiceAccountToken secrets
    • ensures a ServiceAccountToken secret exists for every ServiceAccount
    • ensures ServiceAccountToken secrets are deleted when the corresponding ServiceAccount is deleted
    • ensures the "token" key has a value set for every ServiceAccountToken
  • ServiceAccountsController to ensure a ServiceAccount (named "default") exists in every namespace
  • Admission plugin:
    • set "default" ServiceAccount on pods without a specified serviceAccount
    • reject pods which reference a serviceAccount that does not exist
    • optionally (enabled by default) mount the pod's service account's APISecret at a well-known location (/var/run/secrets/kubernetes.io/serviceaccount)
    • optionally (disabled by default for now) reject pods that have SecretVolumeSources that reference secrets the pod's service account doesn't reference
  • JWT token generator (see the sketch after this list). Generates the following claims:
    • iss (issuer, kubernetes.io/serviceaccount)
    • sub (subject/username, serviceaccount:[namespace]:[service account name])
    • kubernetes.io/namespace (for authenticator to look up secret/service account)
    • kubernetes.io/service-account.name
    • kubernetes.io/service-account.uid
    • kubernetes.io/secret.name (for looking up secret to make sure token hasn't been revoked)
  • JWT token authenticator.
    • validates well-formedness
    • validates signature (supports trying multiple keys to allow signing key rotation)
    • validates issuer and presence of expected claims
    • optionally looks up secret
      • validates its name/existence (ensures the token hasn't been revoked)
    • optionally looks up service account
      • validates the UID still matches
    • uses sub, kubernetes.io/service-account.uid claims from token to build user.Info
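
For illustration, a minimal Go sketch of the claim layout above, using the github.com/golang-jwt/jwt/v4 package. The PR ships its own generator, so the package choice and helper names here are assumptions; only the claim keys come from the list above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	jwt "github.com/golang-jwt/jwt/v4"
)

// generateToken is hypothetical; it shows the claims described above.
func generateToken(key *rsa.PrivateKey, namespace, saName, saUID, secretName string) (string, error) {
	claims := jwt.MapClaims{
		"iss": "kubernetes.io/serviceaccount",
		"sub": fmt.Sprintf("serviceaccount:%s:%s", namespace, saName),
		"kubernetes.io/namespace":            namespace,
		"kubernetes.io/service-account.name": saName,
		"kubernetes.io/service-account.uid":  saUID,
		// recorded so the authenticator can confirm the token hasn't been revoked
		"kubernetes.io/secret.name": secretName,
	}
	return jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(key)
}

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	token, err := generateToken(key, "default", "myserviceaccount", "some-uid", "mysecretname")
	if err != nil {
		panic(err)
	}
	fmt.Println(token)
}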

Short-term follow-ups:

  • Put all service accounts in a namespace in a group for ease of reference? - Add groups to service account user.Info #8607
  • Move JWT generation/signing into APIServer?
  • Sweep secret usage in scripts and examples to account for service accounts referencing secrets, and then enable the admission plugin to limit pod secret usage to the pod's service account secrets
  • Performance test service account/secret lookups in Authenticator, maybe enable by default with a store in front of the lookups (and/or with an authenticator decision cache)

Longer-term follow-ups:

  • Include kubeconfig and API CA in addition to the token (requires knowing the CA bundle and hostname clients will need to use to contact the API... do we know that?) - Put Root CA Cert for cluster into all(ish) pods #7648
  • Work with @pweil- on interaction with securitycontext
  • Work with @pmorie to update Kubelet to filter secrets for a pod to the ones referenced by the pod's ServiceAccount (might want to expose /pods/[pod]/secrets to make this easier)

Sample usage

To create a service account:

serviceaccount.json:
{
    "kind": "ServiceAccount",
    "metadata": {
        "name": "myserviceaccount"
    }
}

$ kubectl create -f serviceaccount.json

To delete a service account:

$ kubectl delete serviceaccount/myserviceaccount

To create additional API tokens

A controller loop ensures a secret with an API token exists for each service account. To create additional API tokens for a service account, create a secret of type ServiceAccountToken with an annotation referencing the service account, and the controller will update it with a generated token:

secret.json:
{
    "kind": "Secret",
    "metadata": {
        "name": "mysecretname",
        "annotations": {
            "kubernetes.io/service-account.name": "myserviceaccount"
        }
    },
    "type": "kubernetes.io/service-account-token"
}

$ kubectl create -f secret.json
$ kubectl describe secret mysecretname

To delete/invalidate a service account token:

$ kubectl delete secret mysecretname

To set a pod's service account

Create or update the pod with the spec.serviceAccount field set to the name of a ServiceAccount. That field is set to the name of the default ServiceAccount if empty.
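
For illustration (the spec.serviceAccount field name is as described above; the other fields are ordinary pod boilerplate and the image name is arbitrary):

pod.json:
{
    "kind": "Pod",
    "metadata": {
        "name": "mypod"
    },
    "spec": {
        "serviceAccount": "myserviceaccount",
        "containers": [
            {"name": "main", "image": "nginx"}
        ]
    }
}

$ kubectl create -f pod.json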

To mount the APISecret in containers

Ha! Trick step. You don't have to do anything... the ServiceAccount admission plugin automatically mounts the pod's spec.serviceAccount's ServiceAccountToken secret at /var/run/secrets/kubernetes.io/serviceaccount. That means containers can count on an API token existing at /var/run/secrets/kubernetes.io/serviceaccount/token
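
For example, from inside a container (a sketch: it assumes the standard KUBERNETES_SERVICE_HOST environment variable, and skips server certificate verification because the CA bundle is not mounted yet, per the longer-term follow-ups above):

$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -k -H "Authorization: Bearer $TOKEN" https://$KUBERNETES_SERVICE_HOST/api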

@googlebot:

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please let us know the company's name.

@liggitt force-pushed the service_account branch 7 times, most recently from 467a6ce to 0b75524 on April 21, 2015 at 20:25
@liggitt (Member, Author) commented Apr 21, 2015

sorry for the massive cc, but a lot of you might be interested in this:
@erictune @roberthbailey @smarterclayton @pmorie @pweil- @ncdc

@smarterclayton (Contributor):
Really awesome

@pmorie (Member) commented Apr 21, 2015

Haven't reviewed yet but may intersect what's being discussed in #6578. cc @markturansky

func startComponents(etcdClient tools.EtcdClient, cl *client.Client, addr net.IP, port int) {
	machineList := []string{"localhost"}

	runApiServer(etcdClient, addr, port, *masterServiceNamespace)
	runScheduler(cl)
	runControllerManager(machineList, cl, *nodeMilliCPU, *nodeMemory)
	runServiceAccountController(cl)
Contributor:

The controller is also being run from controllermanager.go. Is that correct?

Contributor:

nm, i see that's the standalone kubernetes binary.

Contributor:

which means i need to put my PVC binder in the same place :) Glad I was reviewing this PR!

Contributor:

No, actually, I think this does get run twice.

You have this in controllermanager.go and that's run on line 165 above. Two controllers running.

Contributor:

I'm having a conversation with myself. runControllerManager runs the same controllers as controllermanager but they are not the same thing.

I suppose the question then is why a separate run method for your controller instead of rolling it into runControllerManager (which ideally wouldn't have duplication with controllermanager.go)?

	// It must be of type SecretTypeServiceAccountToken
	APISecret string `json:"apiSecret"`
	// Secrets is the list of secrets allowed to be used by pods running using this ServiceAccount
	Secrets []string `json:"secrets"`
Member:

So, apiSecret is for this service account to talk to kubernetes/openshift apiserver, and secret is to talk to other things, like, say, the GCE API or the github API?

Should we call it KubeSecret? Or do you not want to leak the k word into OS API (understandable)? Other ideas?

Member:

What are your thoughts on tradeoffs on this design versus just having one array of secret names, where one of the entries is by convention a secret of type SecretTypeServiceAccountToken? It does work around the naming problem.

Member:

Would some clusters ever have (I'm making this up but hopefully you get the idea) a GithubSecretController which adds Github keys to pods based on a certain default Github project for the namespace? Or a GoogleCloudStorageController which adds GoogleCloudStorage bucket keys? Etc.

If that is the case, then those initializers will need to be allowed to edit the Secrets array. Which requires that the array have a patchStrategy:"merge" patchMergeKey:"foo".

What would foo be?

If we aren't sure what foo is, then I think this array needs to be a map.
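
For context, a sketch of what a mergeable list could look like under the docs/api-conventions.md patch conventions; the ObjectReference shape is illustrative, not something this PR proposes:

package api

// Illustrative only: keying the list on "name" gives strategic merge
// patches a merge key, so independent controllers (a GithubSecretController,
// say) can each add an entry without clobbering the others.
type ServiceAccount struct {
	// ...other ServiceAccount fields...

	Secrets []ObjectReference `json:"secrets" patchStrategy:"merge" patchMergeKey:"name"`
}

type ObjectReference struct {
	Name string `json:"name"`
}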

Member Author:

The apiSecret field did feel messy... open to suggestions. My thinking in making it a special field:

  1. It is sort of special... the secret it refers to has to be of a certain type, and two different service accounts shouldn't reference the same ServiceAccountToken secret.
  2. Explicitly naming the secret that holds the ServiceAccount's token lets the admission controller that sets up the VolumeMount do so based only on the info in the ServiceAccount, without also listing/searching secrets.
  3. Avoids ambiguity if there is more than one ServiceAccountToken secret for a ServiceAccount (which is intentionally possible)

As to the name, serviceAccountTokenSecret felt too long (though that might be better than apiToken now that I think about it), and tokenSecret not descriptive enough. If the token is usable against things running on top of Kube with the ability to identify the ServiceAccount from it, calling it a KubeSecret felt too constraining.

Member Author:

Would some clusters ever have ... which adds Github keys to pods ...

Sure. Controllers managing secrets (either to generate them, populate them, or rotate them) seems very reasonable to me.

Which requires that the array have a patchStrategy:"merge" patchMergeKey:"foo".

The patchStrategy stuff is new to me... I'll catch up and update

Member:

It is fairly new and explained somewhat in docs/api-conventions.md.

@erictune (Member):

Excited about this PR.

@erictune (Member):

I've been working on service accounts for system components (bootstrapping).
For that, I would like to create a service account and a token prior to when the controller has started, and have the admission controller attach the volume for me. Can you confirm that this will work?

@erictune (Member):

This is a perfect use case for integration testing.
Suggest you start with a copy of test/integration/auth_test.go, add a controller-manager thread, as is done in cmd/integration/integration.go, and then do holistic tests of system behavior.
Much faster than an e2e test.

@erictune (Member):

What is the story about updating a token and its dependent processes?

I guess the controller could just update the Secret when it is near expiry, and existing pods would keep running with their already mounted data, and newly created pods would use the new secret? Some other thing could make sure all the pods that used the old secret were re-made.

@liggitt (Member, Author) commented Apr 24, 2015

The service account tokens have to be more structured in order for the authenticator to look in the right namespace and secret to validate them. Assuming you generated them the same way the controller would, nothing would prevent you from doing so proactively. The admission plugin is cleanly separated from the account/token-creating controllers.

e.deleteSecret(newSecret)
}

// Handle case where they changed the type or annotations
Member:

You need resourceVersion as well as UID to reliably detect changes, since these objects are not immutable.

Member Author:

I'm just making sure the old and new secret reference the same serviceAccount (both name and UID)... I don't think I care about the specific resourceVersion of the serviceaccount
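
A sketch of that comparison as a Go helper; the helper name is hypothetical, and the .uid annotation key mirrors the claim names above, so treat it as an assumption:

// Hypothetical helper: two secrets are treated as referencing the same
// ServiceAccount when their annotations agree on both name and UID;
// resourceVersion is deliberately ignored.
func sameServiceAccount(oldAnn, newAnn map[string]string) bool {
	return oldAnn["kubernetes.io/service-account.name"] == newAnn["kubernetes.io/service-account.name"] &&
		oldAnn["kubernetes.io/service-account.uid"] == newAnn["kubernetes.io/service-account.uid"]
}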

@erictune (Member):

To ponder:
In a multi-cluster environment, do pods need credentials to communicate with other apiservers? Maybe that is a special case, and you use an Opaque token there? Or maybe the controller syncs with a global database of tokens, with a single token working in all clusters? Not critical to figure out for this PR.

@erictune (Member):

@cjcullen

@liggitt (Member, Author) commented Apr 24, 2015

If the source cluster was one of the claims in the token, you could still do cross cluster validation. Alternately, you could expire tokens (not my preference for tokens given to machine accounts) and just verify the token's signature on the remote clusters.

@liggitt (Member, Author) commented Apr 24, 2015

The same mechanism that would let you have a single signing (private) key to sign the tokens and multiple verifying (public) keys would allow:

  • signing key rotation (keep the old public keys as verifying keys for some period of time)
  • third party verification of tokens without having the signing key
  • multi-cluster token sharing (each server has its own signing key for creating tokens, and all public keys from all servers for verifying)
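
A sketch of that multi-key verification loop, using the same golang-jwt package and imports as the generation sketch earlier (again an assumption; the PR's authenticator is its own implementation):

// Try each configured public key in turn; the token is accepted if any
// key verifies its signature. Old public keys can stay in this list
// while the signing key rotates.
func verifyToken(tokenString string, keys []*rsa.PublicKey) (*jwt.Token, error) {
	var lastErr error
	for _, key := range keys {
		token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
			// Reject tokens signed with an unexpected algorithm.
			if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
				return nil, fmt.Errorf("unexpected signing method %v", t.Header["alg"])
			}
			return key, nil
		})
		if err == nil && token.Valid {
			return token, nil
		}
		lastErr = err
	}
	return nil, lastErr
}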

@liggitt force-pushed the service_account branch 2 times, most recently from 9656370 to b9ab5ce on April 27, 2015 at 13:36
@nikhiljindal merged commit d75bd8b into kubernetes:master on May 12, 2015
@liggitt (Member, Author) commented May 12, 2015

@nikhiljindal I've run e2e locally, but hadn't completed a GCE run with this branch yet... working on that today

@roberthbailey (Contributor):

Jenkins will probably beat you to it now that this is merged. :)

@deads2k (Contributor) commented May 12, 2015

> Jenkins will probably beat you to it now that this is merged. :)

Can you comment here on pass/fail, so I know whether to rebase on top of it or wait for the revert?

@liggitt (Member, Author) commented May 12, 2015

> wait for the revert?

You're so supportive :)

@nikhiljindal (Contributor):

Aah, sorry. I saw the status/LGTM tag on the PR and assumed that it was ready to merge :)

@roberthbailey (Contributor):

On GCE:

Ran 38 of 44 Specs in 1488.142 seconds
SUCCESS! -- 38 Passed | 0 Failed | 1 Pending | 5 Skipped I0512 10:53:34.645502    9079 driver.go:96] All tests pass

@roberthbailey (Contributor):

On GKE-CI:

Summarizing 1 Failure:

[Fail] ServiceAccounts [It] should mount an API token into pods 
/go/src/github.com/GoogleCloudPlatform/kubernetes/_output/dockerized/go/src/github.com/GoogleCloudPlatform/kubernetes/test/e2e/util.go:374

Ran 39 of 45 Specs in 1096.853 seconds
FAIL! -- 38 Passed | 1 Failed | 1 Pending | 5 Skipped F0512 11:02:03.234970   29396 driver.go:94] At least one test failed
2015/05/12 11:02:03 Error running Ginkgo tests: exit status 255

@roberthbailey (Contributor):

Looks like we need to add the ServiceAccount to ADMISSION_CONTROL for GKE.

@pmorie (Member) commented May 12, 2015

@liggitt ^^

@liggitt (Member, Author) commented May 12, 2015

@roberthbailey I'm having a hard time telling where GKE-CI gets its ADMISSION_CONTROL env from... would that be fixed by #8127 or is that something you have to do on your side?

@roberthbailey (Contributor):

Fixing it on our side.

@liggitt (Member, Author) commented May 12, 2015

@roberthbailey ok... the two changes that were needed for GCE were:

  1. Adding ServiceAccount to ADMISSION_CONTROL:
    https://github.com/GoogleCloudPlatform/kubernetes/pull/7101/files#diff-57380bb03c3598646d5a3cb66518fa0bR79
  2. Adding a --service_account_private_key_file=<server-private-key> arg to kube-controller-manager:
    https://github.com/GoogleCloudPlatform/kubernetes/pull/7101/files#diff-65667c6e20fad19143da85de765ee1e3

Not sure how GKE starts up kube-controller-manager, so I thought I'd mention it
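
For reference, the startup flags described above look roughly like this (surrounding flags and values elided; only the two flags named above come from this PR):

# kube-apiserver (cluster startup populates this from the ADMISSION_CONTROL env var):
kube-apiserver ... --admission_control=...,ServiceAccount,...

# kube-controller-manager:
kube-controller-manager ... --service_account_private_key_file=<server-private-key>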

@roberthbailey (Contributor):

The second one gets picked up automatically. Changes to the bash scripts/config to create a cluster need to be mirrored in our provisioning code.

@roberthbailey (Contributor):

All fixed on our end: GKE-CI is now happy as well:

Ran 39 of 45 Specs in 1073.010 seconds
SUCCESS! -- 39 Passed | 0 Failed | 1 Pending | 5 Skipped I0512 14:34:45.343409    5177 driver.go:96] All tests pass

@nikhiljindal (Contributor):

./hack/local-up-cluster broke for me after syncing this change.

/tmp/kube-apiserver.log shows:

F0513 14:00:41.636446   23797 server.go:285] Invalid Authentication Config: open /var/run/kubernetes/serviceaccount.key: no such file or directory

After some debugging I found that openssl genrsa -out /var/run/kubernetes/serviceaccount.key 2048 was failing with the following output:

/var/run/kubernetes/serviceaccount.key: No such file or directory
139781268178592:error:02001002:system library:fopen:No such file or directory:bss_file.c:398:fopen('/var/run/kubernetes/serviceaccount.key','w')
139781268178592:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:

I ran the following commands to fix it:

sudo mkdir /var/run/kubernetes
sudo touch /var/run/kubernetes/serviceaccount.key
sudo openssl genrsa -out /var/run/kubernetes/serviceaccount.key 2048

./hack/local-up-cluster works fine for me now.
Is this expected?

@liggitt (Member, Author) commented May 13, 2015

@nikhiljindal nope, opened #8174

@liggitt (Member, Author) commented May 13, 2015

worked for me locally because that folder existed from a previous run (it gets created later by the apiserver to hold self-signed certs/keys)

@pires (Contributor) commented May 21, 2015

Was this merged in 0.17.0?

@roberthbailey (Contributor):

Nope. It missed the release by a day or so.

@pires (Contributor) commented May 21, 2015

OK, tagging 0.17.1 has just fixed that. Thanks

Labels
lgtm "Looks good to me", indicates that a PR is ready to be merged.