
serviceaccounts: Add JWT KeyIDs to tokens #78502

Merged
merged 1 commit into kubernetes:master from ahmedtd:jwt-keyid on Aug 30, 2019

Conversation

@ahmedtd (Contributor) commented May 29, 2019

This commit fills out the JWT "kid" (KeyID) field on most serviceaccount tokens we create. The KeyID value we use is derived from the public key of the keypair that backs the cluster's OIDC issuer.

OIDC verifiers use the KeyID to smoothly cope with key rotations:

  • During a rotation, the verifier will have multiple keys cached from the issuer, any of which could have signed the token being verified. KeyIDs let the verifier pick the appropriate key without having to try each one.

  • Seeing a new KeyID is a trigger for the verifier to invalidate its cached keys and fetch the new set of valid keys from the identity provider.

The value we use for the KeyID is derived from the identity provider's public key by serializing it in DER format, taking the SHA-256 hash, and then URL-safe base64-encoding it. This gives a value that is strongly bound to the key but can't be reversed to obtain the public key, which keeps people from being tempted to derive the key from the key ID and use it for verification.
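For illustration, a minimal sketch of that derivation in Go (the helper name and the unpadded URL-safe encoding are assumptions, not necessarily exactly what this PR ships):

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/base64"
        "fmt"
    )

    // keyIDFromPublicKey (name illustrative): DER-serialize the public key,
    // take the SHA-256 hash, and URL-safe base64-encode the digest.
    func keyIDFromPublicKey(pub interface{}) (string, error) {
        der, err := x509.MarshalPKIXPublicKey(pub)
        if err != nil {
            return "", fmt.Errorf("failed to serialize public key to DER: %v", err)
        }
        sum := sha256.Sum256(der)
        return base64.RawURLEncoding.EncodeToString(sum[:]), nil
    }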

Tokens based on jose OpaqueSigners are omitted for now --- I don't see any way to actually run the API server that results in an OpaqueSigner being used.

/kind feature
/sig auth

Does this PR introduce a user-facing change?:

Service account tokens now include the JWT Key ID field in their header.
@k8s-ci-robot (Contributor) commented May 29, 2019

Welcome @ahmedtd!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot (Contributor) commented May 29, 2019

Hi @ahmedtd. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot requested review from deads2k and enj May 29, 2019
@ahmedtd (Contributor, Author) commented May 29, 2019

/assign @liggitt

@enj (Member) commented May 29, 2019

IMO a change like this requires more thought and possibly a KEP.

Also is there any real performance difference with trying all combinations when we have the token cache?

@ahmedtd (Contributor, Author) commented May 30, 2019

I'm happy to put together a KEP if you think it's necessary.

To add some context, this is part of the work we (mikedanese, awly, alexcope) are doing within Google to let Google IAM trust tokens issued by the cluster's OIDC issuer (GKE Workload Identity [1]).

The performance difference here is not for any component within the Kubernetes cluster, but for the OIDC relying party that is verifying the tokens (Google IAM in our case, but any other OIDC relying party should see the same benefit).

To verify a token, the relying party needs to fetch the identity provider's JWK public keys from its well-known endpoint. There could be several keys advertised here (though there aren't in Kubernetes' case). To verify the token, each of the advertised keys needs to be tried until the one that originally signed the token is found. Key IDs let the relying party skip these multiple expensive operations and jump directly to the correct key.
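As a rough sketch of that lookup (illustrative only, not code from this PR; assumes gopkg.in/square/go-jose.v2 and a JWKS the relying party has already fetched):

    import (
        "fmt"

        jose "gopkg.in/square/go-jose.v2"
        "gopkg.in/square/go-jose.v2/jwt"
    )

    // verifyWithKeySet: the kid in the token header lets the relying party jump
    // straight to the matching key instead of trying every advertised key.
    func verifyWithKeySet(token string, jwks *jose.JSONWebKeySet, claims interface{}) error {
        parsed, err := jwt.ParseSigned(token)
        if err != nil {
            return fmt.Errorf("malformed token: %v", err)
        }
        kid := parsed.Headers[0].KeyID
        keys := jwks.Key(kid) // direct lookup by key ID
        if len(keys) == 0 {
            // An unknown kid is also a reasonable trigger to re-fetch the provider's keys.
            return fmt.Errorf("no key with kid %q in cached JWKS", kid)
        }
        return parsed.Claims(keys[0].Key, claims)
    }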

Additionally, the relying party caches the identity provider's public keys for a period of time, to avoid slamming the identity provider with traffic proportional to the traffic the relying party is experiencing. Seeing a new Key ID triggers the relying party to dump its cache and re-fetch keys. (This behavior might be specific to Google IAM, but it seems like most relying parties would need to make a similar choice.) Without this trigger, the cache TTL needs to be low enough to not cause a significant outage if the identity provider rotates its keys.

[1] https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity

@enj (Member) commented May 31, 2019

> I'm happy to put together a KEP if you think it's necessary.

Let us start with a discussion at the sig-auth biweekly meeting. This seems highly related to kubernetes/enhancements#704 and kubernetes/community#2314.

> To add some context, this is part of the work we (mikedanese, awly, alexcope) are doing within Google to let Google IAM trust tokens issued by the cluster's OIDC issuer (GKE Workload Identity [1]).

> The performance difference here is not for any component within the Kubernetes cluster, but for the OIDC relying party that is verifying the tokens (Google IAM in our case, but any other OIDC relying party should see the same benefit).

> To verify a token, the relying party needs to fetch the identity provider's JWK public keys from its well-known endpoint. There could be several keys advertised here (though there aren't in Kubernetes' case). To verify the token, each of the advertised keys needs to be tried until the one that originally signed the token is found. Key IDs let the relying party skip these multiple expensive operations and jump directly to the correct key.

Seems like you could simply re-use the token cache logic from kube to avoid this cost. I think you would want a short lived cache anyway.

> Additionally, the relying party caches the identity provider's public keys for a period of time, to avoid slamming the identity provider with traffic proportional to the traffic the relying party is experiencing. Seeing a new Key ID triggers the relying party to dump its cache and re-fetch keys. (This behavior might be specific to Google IAM, but it seems like most relying parties would need to make a similar choice.) Without this trigger, the cache TTL needs to be low enough to not cause a significant outage if the identity provider rotates its keys.

You could just re-fetch the keys when none match the token, right (with some smarts around how often you are willing to do that)?

> [1] https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity


My general hesitation here is that we are expanding the API surface that we expose to external components (and I am not sure we actually need to do that).

@ahmedtd (Contributor, Author) commented May 31, 2019

> Let us start with a discussion at the sig-auth biweekly meeting.

Sounds good!

> kubernetes/community#2314

You're right, I think this behavior needs to be part of this proposal. (The JWKS URL is required to specify key IDs if it advertises multiple keys.)

> re-use the token cache logic from kube to avoid this cost. I think you would want a short lived cache anyway.

Can you point me to the code you're talking about? I've done a little bit of digging, but I can't figure out what you're referring to.

> You could just re-fetch the keys when none match the token, right (with some smarts around how often you are willing to do that)?

Yes, the key ID cache-dumping behavior originally seemed unnecessary to me, but it is mentioned in the OpenID Connect spec. I'm not totally sure of the rationale.

@mikedanese mikedanese self-assigned this Jun 1, 2019
// KeyID helps OIDC verifiers cope with key rotations. Making the
// derivation non-reversible makes it impossible for someone to
// accidentally obtain the real key from the key ID and use it for token
// validation.

@lbragstad commented Jun 6, 2019

Even though the KeyID is generated using a one-way hash of the public key, is it useful for bad actors to collect JWTs they can assert were signed with the same private key?

@ahmedtd (Author, Contributor) commented Jun 20, 2019

It seems like that's already possible, with some effort — the cluster advertises its public keys, and it's easy to test that a given JWT was signed by a given key.
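For example (illustrative only, using gopkg.in/square/go-jose.v2), checking whether a given public key produced a token's signature is a single call:

    // wasSignedBy reports whether pub verifies the token's signature
    // (hypothetical helper, not code from this PR).
    func wasSignedBy(token string, pub interface{}) bool {
        obj, err := jose.ParseSigned(token)
        if err != nil {
            return false
        }
        _, err = obj.Verify(pub)
        return err == nil
    }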

@lbragstad commented Jun 20, 2019

Oh, interesting. Thanks for the additional information.

// accidentally obtain the real key from the key ID and use it for token
// validation.

publicKeyDERBytes, err := x509.MarshalPKIXPublicKey(&keyPair.PublicKey)

@mikedanese (Member) commented Jun 12, 2019

extract this to a function and share it between RSA and ECDSA

@ahmedtd (Author, Contributor) commented Jun 20, 2019

Done.

signer, err := jose.NewSigner(
    jose.SigningKey{
        Algorithm: alg,
        Key:       opaqueSigner,

@mikedanese (Member) commented Jun 12, 2019

Opaque signers should also forward key ids:

KeyID: opaqueSigner.Public().KeyID

Key rotation is going to be tricky though.

@ahmedtd (Author, Contributor) commented Jun 20, 2019

Done

@ahmedtd (Author, Contributor) commented Jun 20, 2019

Actually, I spoke too soon --- SigningKey doesn't have a KeyID field. Is it safe to change the Key field from opaqueSigner to a JSONWebKey that wraps opaqueSigner?

signer, err := jose.NewSigner(
    jose.SigningKey{
        Algorithm: alg,
        Key: &jose.JSONWebKey{
            Algorithm: string(alg), // JSONWebKey.Algorithm is a string
            Key:       opaqueSigner,
            KeyID:     opaqueSigner.Public().KeyID,
            Use:       "sig",
        },
    },
    nil,
)

if err != nil {
    return nil, errors.Wrapf(err, "failed to create signer")

@mikedanese (Member) commented Jun 12, 2019

please, not yet :) I'll learn what Wrapf does when it's in the stdlib (maybe go1.13).

@ahmedtd (Author, Contributor) commented Jun 20, 2019

Done. It looks like what's coming in 1.13 is not Wrapf, but instead a new format code for Errorf ("%w").

@mikedanese (Member) commented Jun 12, 2019

> Also is there any real performance difference with trying all combinations when we have the token cache?

This is primarily useful for relying parties of the service account issuer that cannot use TokenReview. A kid change signals that the verifying client needs to refetch keys.

@ahmedtd ahmedtd force-pushed the ahmedtd:jwt-keyid branch 3 times, most recently from f483f65 to aa26c1d Jun 20, 2019
@mikedanese mikedanese requested a review from micahhausler Jun 21, 2019
@ahmedtd (Contributor, Author) commented Jun 21, 2019

@enj We discussed this at the last sig-auth meeting. The only concerns that were raised were:

  • ensure that opaque signers pass through their underlying key IDs
  • mention the key IDs in the OIDC Discovery Proposal
@mikedanese (Member) commented Jun 29, 2019

Would it be easier to just add an extra header?

// Optional map of additional keys to be inserted into the protected header
// of a JWS object. Some specifications which make use of JWS like to insert
// additional values here. All values must be JSON-serializable.
ExtraHeaders map[HeaderKey]interface{}
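For context, roughly what that alternative might look like with go-jose's SignerOptions (a sketch under the assumption that keyID, alg, and privateKey already exist; not what the PR ended up doing):

    // Sketch: inject the kid via the protected header instead of the signing key.
    opts := (&jose.SignerOptions{}).WithHeader(jose.HeaderKey("kid"), keyID)
    signer, err := jose.NewSigner(
        jose.SigningKey{Algorithm: alg, Key: privateKey},
        opts,
    )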

@ahmedtd (Contributor, Author) commented Jun 30, 2019

> Would it be easier to just add an extra header?

I can't tell what you're referring to here. Can you give me some more context?

func signerFromOpaqueSigner(opaqueSigner jose.OpaqueSigner) (jose.Signer, error) {
    alg := jose.SignatureAlgorithm(opaqueSigner.Public().Algorithm)

    signer, err := jose.NewSigner(

@mikedanese (Member) commented Aug 27, 2019

return jose.NewSigner. You decorate the error already above.

@ahmedtd (Author, Contributor) commented Aug 28, 2019

I feel like it's important for debuggability to decorate the errors at the point they enter the kubernetes codebase from our dependencies.

That way, someone trying to debug this code doesn't have to try to guess which library call the error they're seeing came out of.

KeyID: keyID,
Use: "sig",
}

signer, err := jose.NewSigner(

@mikedanese (Member) commented Aug 27, 2019

return jose.NewSigner

@ahmedtd (Author, Contributor) commented Aug 28, 2019

I feel like it's important for debuggability to decorate the errors at the point they enter the kubernetes codebase from our dependencies.

That way, someone trying to debug this code doesn't have to try to guess which library call the error they're seeing came out of.

t.Fatalf("Error checking for key ID: couldn't parse token: %v", err)
}

if jws.Signatures[0].Header.KeyID == "" {

@mikedanese (Member) commented Aug 27, 2019

This check seems unnecessary given the one below.

@ahmedtd (Author, Contributor) commented Aug 28, 2019

Done

@ahmedtd (Author, Contributor) commented Aug 28, 2019

I feel like it's important for debuggability to decorate the errors at the point they enter the kubernetes codebase from our dependencies.

That way, someone trying to debug this code doesn't have to try to guess which library call the error they're seeing came out of.

}

if jws.Signatures[0].Header.KeyID != expectedKeyID {
    t.Fatalf("Token %q has the wrong KeyID (got %q, want %q)", jwsString, jws.Signatures[0].Header.KeyID, expectedKeyID)

@mikedanese (Member) commented Aug 27, 2019

t.Errorf?

@ahmedtd (Author, Contributor) commented Aug 28, 2019

Done

(Commit message is identical to the PR description above.)
@ahmedtd ahmedtd force-pushed the ahmedtd:jwt-keyid branch from aa26c1d to b4e9958 Aug 28, 2019
@mikedanese (Member) commented Aug 29, 2019

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm label Aug 29, 2019
@mikedanese mikedanese added this to the v1.16 milestone Aug 29, 2019
@k8s-ci-robot (Contributor) commented Aug 29, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahmedtd, mikedanese

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

// pick the correct key for verification when the identity provider advertises
// multiple keys.
//
// Making the derivation non-reversible makes it impossible for someone to

@enj (Member) commented Aug 29, 2019

Technically someone could just build the hash from the public key. But that would be silly to guard against.

@mikedanese (Member) commented Aug 29, 2019

heh, ya. good point. it's just an arbitrary value and the choice doesn't have security implications unless you make a really bad decision. This hash is nice because it's not going to change if the key is not rotated and (most likely) going to change when the key is rotated but that's probably the extent of it.

@enj (Member) commented Aug 29, 2019

I only scanned the code, but including the key ID sounds fine to me. 👍

I do think we need to declare that all service account tokens are not opaque since using the key ID implies structure.

@mikedanese (Member) commented Aug 29, 2019

> I do think we need to declare that all service account tokens are not opaque since using the key ID implies structure.

Feel free to review the KEP so we can start making changes :). kubernetes/enhancements#1205

I'd like to see GA of TokenRequest in 1.17

@mikedanese (Member) commented Aug 29, 2019

/retest

@k8s-ci-robot k8s-ci-robot merged commit f44d8f5 into kubernetes:master Aug 30, 2019
24 checks passed
cla/linuxfoundation ahmedtd authorized
pull-kubernetes-bazel-build Job succeeded.
pull-kubernetes-bazel-test Job succeeded.
pull-kubernetes-conformance-image-test Skipped.
pull-kubernetes-conformance-kind-ipv6 Skipped.
pull-kubernetes-cross Skipped.
pull-kubernetes-dependencies Job succeeded.
pull-kubernetes-e2e-gce Job succeeded.
pull-kubernetes-e2e-gce-100-performance Job succeeded.
pull-kubernetes-e2e-gce-csi-serial Skipped.
pull-kubernetes-e2e-gce-device-plugin-gpu Job succeeded.
pull-kubernetes-e2e-gce-iscsi Skipped.
pull-kubernetes-e2e-gce-iscsi-serial Skipped.
pull-kubernetes-e2e-gce-storage-slow Skipped.
pull-kubernetes-godeps Skipped.
pull-kubernetes-integration Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big Job succeeded.
pull-kubernetes-local-e2e Skipped.
pull-kubernetes-node-e2e Job succeeded.
pull-kubernetes-node-e2e-containerd Job succeeded.
pull-kubernetes-typecheck Job succeeded.
pull-kubernetes-verify Job succeeded.
pull-publishing-bot-validate Skipped.
tide In merge pool.