Implements OIDC distributed claims. #63213
Conversation
This is to handle: #62920
Force-pushed from 7e91195 to d68fd41
```go
//
// Example:
// "groups" -> []string{"http://example1.com/foo","http://example2.com/bar"}
IssuersPerClaim map[string][]string
```
why shouldn't the issuer decide rather than the k8s oidc client? what does this solve?
I'm also for not adding new APIs.
Two benefits, IMHO:
- Makes an explicit guarantee to the cluster owner that only issuers they explicitly approve of will ever be called to resolve distributed claims. Note that it's not a decision by the oidc client; but a restriction on the acceptable values.
- Makes it possible for us to initialize the ID token provider ahead of time.
@ericchiang could you please clarify your comment? What new APIs are being added? Perhaps more importantly, what would you like to see instead?
What new APIs are being added?
New flags.
Makes an explicit guarantee to the cluster owner that only issuers they explicitly approve of will ever be called to resolve distributed claims. Note that it's not a decision by the oidc client; but a restriction on the acceptable values.
Users already put a lot of trust in their primary ID Token issuer. I feel like the benefit is small compared to adding more configuration knobs.
Let me think about this a little.
Thinking about this a little more, the only distributed claims we should allow are the username and groups claims (edit: or maybe even only allow it for the groups claim).
Could you explain why you think this is the case? (For example, that only nonstandard claims are allowed to be distributed?)
I would not expect a distributed claim to be able to provide the expiry, for example.
Hm, I have a slightly different view, though I'm not an expert in the matter. It seems that a distributed claim response must provide the expiry.
Otherwise, it becomes similar to a long-lived credential, which could be stolen and replayed. With a timestamp, whatever is examining the credential has a way to determine if the credential is fresh or not and therefore if it can believe the claims. This seems like quite an important requirement from a security perspective.
If we're worried about what remote addresses we can consult for this maybe we whitelist issuers?
That would work.
However, when would an issuer provide a distributed claim that an admin wouldn't be comfortable with? If the admin is picking a username or group claim they're surely aware of if that claim can be distributed and where it'd be distributed to.
I think from an auditability perspective it is much stronger to claim "we know exactly what calls our API server is making" than "our API server may be redirected to wherever our IDP says".
But if you think that's not a big concern, I'll be happy to oblige.
I would not expect a distributed claim to be able to provide the expiry, for example.
Hm, I have a slightly different view, though I'm not an expert in the matter. It seems that a distributed claim response must provide the expiry.
I think you're talking about two different things. First, we verify the presented id_token, including its `iss`, `exp`, and `nbf` claims. That is the `exp` claim Eric was saying he wouldn't expect to be able to be distributed.
Once verified, we look for the username and group claims. Those could be distributed, and if they are, according to the spec, the response from the distributed endpoint is another JWT that must be verified (including its `iss`, and potentially its own `exp` and `nbf` claims).
First, we verify the presented id_token, including its iss, exp, nbf claims. That is the exp claim Eric was saying he wouldn't expect to be able to be distributed.
Ah I see. Indeed, not all claims may be named in `_claim_names`. In this implementation, a normal claim always wins if there is a claim name clash, which means `exp` will never be resolved through `_claim_sources`. But let's make that statement even stronger, then.
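The normal-claim-wins rule described above can be illustrated with a small Go sketch. `mergeResolved` is a hypothetical helper, not the PR's actual code:

```go
package main

import "fmt"

// mergeResolved merges claims resolved from distributed sources into the
// token's own ("normal") claims. A claim already present in the token is
// never overwritten, so exp can never come from _claim_sources.
func mergeResolved(normal, resolved map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(normal)+len(resolved))
	for k, v := range resolved {
		out[k] = v
	}
	for k, v := range normal { // normal claims win on a name clash
		out[k] = v
	}
	return out
}

func main() {
	normal := map[string]interface{}{"exp": 123, "sub": "alice"}
	resolved := map[string]interface{}{"exp": 999, "groups": []string{"team1"}}
	merged := mergeResolved(normal, resolved)
	fmt.Println(merged["exp"])    // 123
	fmt.Println(merged["groups"]) // [team1]
}
```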
To summarize my understanding of what would be acceptable to you:
- Only allow the `groups` claim to be distributed (that is, whatever value has been set in the OIDC configuration to mean groups).
- Require admins to specify `--oidc-allowed-distributed-claims-issuer` to list all acceptable issuers.
- Introduce `--oidc-disallow-distributed-claims` as an escape hatch.

What do you think?
Only allow groups claim to be distributed (that is, whatever the value has been set in the OIDC configuration to mean groups).
+1
Require admins to specify --oidc-allowed-distributed-claims-issuer to list all acceptable issuers.
-1
I still think that admins are actually more tolerant of requests to different sites than you expect. `accounts.google.com` returns a keys URL hosted under `www.googleapis.com`, for example.
(edit: this is also a restriction we could add later in a backwards compatible way)
Introduce --oidc-disallow-distributed-claims as an escape hatch.
I'd like to see someone request this behavior first.
OK, I scaled back the implementation to support only groups claim resolution through distributed claims. Also removed all flags and options. Better?
@kubernetes/sig-auth-pr-reviews
A bunch of stylistic comments but deferring to Eric/Mike on the overall design
@@ -230,6 +242,9 @@ func (s *BuiltInAuthenticationOptions) AddFlags(fs *pflag.FlagSet) {
```go
		"A key=value pair that describes a required claim in the ID Token. "+
		"If set, the claim is verified to be present in the ID Token with a matching value. "+
		"Repeat this flag to specify multiple claims.")

	fs.Var(flag.NewColonSeparatedMultimapStringString(&s.OIDC.IssuersPerClaim), "oidc-distributed-claims-issuers-per-claim", ""+
```
Remove the `""+` at the end.
Done.
@@ -8,6 +8,7 @@ load(
```
go_test(
    name = "go_default_test",
    size = "small",
```
Indent with spaces (I think `hack/update-bazel.sh` should reformat this).
Hm, on a second thought, let's revert this change altogether.
```go
}

// initVerifier creates a new ID token verifier for the given configuration and issuer URL. On success, calls setVerifier with the
// resulting verifier. Returns true in case of success, or non-nil error in case of an irrecoverable error.
```
The last sentence is redundant.
Done.
```go
// Polls indefinitely in an attempt to initialize the distributed claims
// verifier, or until context canceled.
sync := make(chan struct{})
initFn := func() (done bool, err error) {
```
err is always nil. Should this be `func() bool`?
It has to conform to the `wait.PollUntil` API.
```go
	return true, nil
}
go func() {
```
Why not simply `go wait.PollUntil(time.Second*10, initFn, ctx.Done())`?
`wait.PollUntil` waits for 10 seconds before it executes initFn. I'd prefer not to have to wait if possible. The `wait` package does not seem to have a function that both takes a done channel and executes once before the first pause, as far as I could see.
I see. You could use `wait.PollImmediateInfinite` and move `return ctx.Err()` out of `initFn` to catch a cancelled context.
```go
	return fmt.Errorf("oidc: no claim sources")
}

var sources claims
```
can this be `map[string]endpoint` so you don't have to unmarshal endpoints in the loop below?
Lo and behold, it can!
Done. :)
```go
defer cancel()

// TODO: Allow passing request body with configurable information.
req, _ := http.NewRequest("GET", url, nil)
```
handle the error
Done.
```go
if accessToken != "" {
	req.Header.Set("Authorization", fmt.Sprintf("Bearer %v", accessToken))
}
req.WithContext(ctx)
```
`req = req.WithContext(ctx)`
```go
response, err := client.Do(req)
if err != nil {
	return "", err
}
```
`defer response.Body.Close()`
done.
```go
}()

// Let tests wait until the verifier is initialized.
if synchronizeVerifierForTest {
```
Hmm, I think you can skip the channel and just call wait.PollUntil without a goroutine if the flag is true.
Yes, there are multiple ways of achieving this effect. I opted for a way that minimally modifies the execution path in and out of test.
Force-pushed from d68fd41 to 9948d81
/ok-to-test
```go
	return "", err
}
responseBytes, err := ioutil.ReadAll(response.Body)
defer response.Body.Close()
```
Move this before ReadAll. It won't make Reads fail
Is there any specific reason to prefer that idiom?
It follows the general pattern of
```go
r, err := allocateResource()
if err != nil {
	return err
}
defer cleanupResource(r)
```
If you keep it as is, every reader will think there's some non-obvious semantics and spend time trying to understand the reasoning behind it.
Oh, I see. I was thinking more along the lines of "the first use of `response.Body` is in line N", so it made sense to defer the close in line N+1. Whereas, in fact, the resource was allocated at `... := client.Do(...)`.
Force-pushed from 9948d81 to 7d17c32
/retest
…On Thu, Apr 26, 2018 at 6:59 PM k8s-ci-robot wrote:
> @filmil: The following test *failed*, say /retest to rerun them all: pull-kubernetes-e2e-kops-aws at 7d17c32 (/test pull-kubernetes-e2e-kops-aws). Full PR test history: https://k8s-gubernator.appspot.com/pr/63213
/retest
Force-pushed from 85c0d8d to 8382197
/retest
```go
)

if glog.V(5) {
	glog.Infof("initial claim set: %v", c)
```
This is going to be really verbose... Just print the distributed claim you find in the resolve() method?
`glog.Infof("resolved distributed claim %s from endpoint %s: %v", claimName, endpoint, claimValue)`
Also if you print all the claims you'll print endpoint access tokens.
Yeah, I meant this to be logged only in tests, as they give insight into what happened. Let's remove since it's contentious.
```go
claim string

// A HTTP transport to use for distributed claims
t *http.Transport
```
nit: this should be an `http.Client`
done
```go
if av == nil {
	// This lazy init should normally be very quick.
	client := &http.Client{Transport: r.t, Timeout: 30 * time.Second}
	ctx := oidc.ClientContext(context.Background(), client)
```
nit: would be nice if this context was cancel-able.
hm, there isn't any place to cancel it from. Any suggestions?
adding a TODO is fine for now
Done.
```go
// resolve requests distributed claims from all endpoints passed in,
// and inserts the lookup results into allClaims.
func (r *claimResolver) resolve(endpoint endpoint, allClaims claims) error {
	// TODO: cache resolved claims.
```
I'd think this has to be resolved before we turn on this functionality. These tokens are evaluated on every HTTP request. It's going to be incredibly slow if we leave this as a TODO.
Agreed, but it seemed like it would be better placed in a separate commit. What do you advise? I'm fine with adding more code to this commit, but it's already large as it is.
Add a private field to the config to let the tests enable this feature, and disable it by default?
Good idea! Done.
```go
w.Header().Set("Content-Type", "application/json")
glog.V(5).Infof("%v: returning: %+v", r.URL, *openIDConfig)
w.Write([]byte(*openIDConfig))
default:
```
nit: default should be a 404 with an error
done
```go
}
v, err := r.Verifier(untrustedIss)
if err != nil {
	glog.Errorf("verifier: %v", err)
```
I think it'd be better to return an error if you can't reach the distributed claim provider. Users are going to get weird authorization errors otherwise. But I'd be fine if we want to keep it this way.
Done.
```go
}
value, ok := distClaims[r.claim]
if !ok {
	return fmt.Errorf("could not find distributed claim: %v", r.claim)
```
We're going to live and die by good error messages here :P
`fmt.Errorf("jwt returned by distributed claim endpoint %s did not contain claim: %v", endpoint, r.claim)`
Agreed. Changed.
```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// TODO: Allow passing request body with configurable information.
```
Is this actually part of the spec?
Azure OIDC IDP supports this. IIUC the spec says nothing either way.
```go
}
ep, ok := sources[src]
if !ok {
	return fmt.Errorf("oidc: malformed claim sources")
```
`fmt.Errorf("id token _claim_names contained a source %s missing in _claim_sources", src)`
done
@@ -381,3 +684,11 @@ func (c claims) hasClaim(name string) bool {
```go
	}
	return true
}

func (c claims) String() string {
```
I don't think we should ever print all the claims. It might contain sensitive information like access tokens.
done
@@ -104,6 +112,73 @@ type Options struct {
```go
// now is used for testing. It defaults to time.Now.
now func() time.Time

// TODO: remove this field once caching is implemented.
enableDistributedClaims bool
```
whoops, you need to remove this
/hold
Ow! Good catch. I do remember removing it; perhaps at some point I hit "undo" one too many times. Please take another look, @ericchiang. Sorry for the ruckus.
/lgtm
/hold cancel
/retest
/retest
A distributed claim allows the OIDC provider to delegate a claim to a separate URL. Distributed claims are of the form seen below, and are defined in OpenID Connect Core 1.0, section 5.6.2. See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims

Example claim:

```
{
  ... (other normal claims) ...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    }
  }
}
```

Example response to a followup request to https://www.example.com is a JWT-encoded claim token:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```

Apart from the indirection, the distributed claim behaves exactly the same as a standard claim. For Kubernetes, this means that the token must be verified using the same approach as for the original OIDC token. This requires the presence of "iss", "aud" and "exp" claims in addition to "groups". All existing OIDC options (e.g. groups prefix) apply.

Any claim can be made distributed, even though the "groups" claim is the primary use case.

Allows groups to be a single string due to kubernetes#33290, even though OIDC defines the "groups" claim to be an array of strings. So, this will be parsed correctly:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```

Expects that distributed claims endpoints return JWT, per OIDC specs.

In case both a standard and a distributed claim with the same name exist, the standard claim wins. The specs seem undecided about the correct approach here.

Distributed claims are resolved serially. This could be parallelized for performance if needed.

Aggregated claims are silently skipped. Support could be added if needed.
/retest Let's try one more time in case this is some kind of a flake. If it repeats, I'll try to dig in. @ericchiang any advice? For the record, the only thing I did above was to rebase on top of master to verify that there are no merge conflicts. That removed the lgtm, however. Would you be able to help?
No great advice for the test flakes, but rebasing removing the LGTM is expected. I can re-apply the label. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ericchiang, filmil. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/test pull-kubernetes-verify
/test pull-kubernetes-verify
:/
/retest Review the full test history for this PR. Silence the bot with an |
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.
Next step to enable this feature is to enable claim caching.
A distributed claim allows the OIDC provider to delegate a claim to a
separate URL. Distributed claims are of the form as seen below, and are
defined in the OIDC Connect Core 1.0, section 5.6.2.
See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims
Example claim:
```
{
  ... (other normal claims) ...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    }
  }
}
```
Example response to a followup request to https://www.example.com is a JWT-encoded claim token:
```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```
Apart from the indirection, the distributed claim behaves exactly
the same as a standard claim. For Kubernetes, this means that the
token must be verified using the same approach as for the original OIDC
token. This requires the presence of "iss", "aud" and "exp" claims in
addition to "groups".
All existing OIDC options (e.g. groups prefix) apply.
Any claim can be made distributed, even though the "groups" claim is
the primary use case.
Allows groups to be a single string due to #33290, even though OIDC defines the "groups" claim to be an array of strings. So, this will be parsed correctly:
```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```
Expects that distributed claims endpoints return JWT, per OIDC specs.
In case both a standard and a distributed claim with the same name
exist, standard claim wins. The specs seem undecided about the correct
approach here.
Distributed claims are resolved serially. This could be parallelized
for performance if needed.
Aggregated claims are silently skipped. Support could be added if
needed.
What this PR does / why we need it: Makes it possible to retrieve many group memberships by offloading to a dedicated backend for groups resolution.
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #62920
Special notes for your reviewer:
There are a few TODOs that seem better handled in separate commits.
Release note: