
Implements OIDC distributed claims. #63213

Merged
merged 1 commit on May 3, 2018

Conversation

filmil
Contributor

@filmil filmil commented Apr 26, 2018

The next step to enable this feature is to enable claim caching.

A distributed claim allows the OIDC provider to delegate a claim to a
separate URL. Distributed claims are of the form seen below, and are
defined in OpenID Connect Core 1.0, section 5.6.2.

See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims

Example claim:

```
{
  ... (other normal claims)...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    },
  },
}
```

Example response to a followup request to https://www.example.com is a
JWT-encoded claim token:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```

Apart from the indirection, the distributed claim behaves exactly
the same as a standard claim. For Kubernetes, this means that the
token must be verified using the same approach as for the original OIDC
token. This requires the presence of "iss", "aud" and "exp" claims in
addition to "groups".

All existing OIDC options (e.g. groups prefix) apply.

Any claim can be made distributed, even though the "groups" claim is
the primary use case.

Allows groups to be a single string due to
#33290, even though
OIDC defines the "groups" claim to be an array of strings. So, this will
be parsed correctly:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```
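
As an illustration of that tolerant parsing, a custom JSON unmarshaler along these lines would accept either form. This is only a sketch, not the PR's actual implementation; the stringOrArray type and all other names are made up:

```
package main

import (
	"encoding/json"
	"fmt"
)

// stringOrArray unmarshals either "team1" or ["team1", "team2"] into a slice.
type stringOrArray []string

func (s *stringOrArray) UnmarshalJSON(b []byte) error {
	var single string
	if err := json.Unmarshal(b, &single); err == nil {
		*s = []string{single}
		return nil
	}
	var many []string
	if err := json.Unmarshal(b, &many); err != nil {
		return err
	}
	*s = many
	return nil
}

func main() {
	var parsed struct {
		Groups stringOrArray `json:"groups"`
	}
	if err := json.Unmarshal([]byte(`{"groups": "team1"}`), &parsed); err != nil {
		panic(err)
	}
	fmt.Println(parsed.Groups) // prints [team1]
}
```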

Expects that distributed claims endpoints return a JWT, per the OIDC spec.

In case both a standard and a distributed claim with the same name
exist, the standard claim wins. The spec seems undecided about the correct
approach here.

Distributed claims are resolved serially. This could be parallelized
for performance if needed.

Aggregated claims are silently skipped. Support could be added if
needed.
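
To make the resolution flow concrete, here is a minimal, hypothetical Go sketch of the steps described above: fetch the JWT from the claim-source endpoint (sending the optional access_token as a bearer token), then verify it the same way as the original ID token. The function name, the go-oidc verifier, and the single-claim struct are illustrative assumptions; this is not the code in this PR, and it does not handle the single-string "groups" case shown above.

```
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net/http"

	oidc "github.com/coreos/go-oidc"
)

// resolveDistributedGroups is a hypothetical helper that fetches and verifies a
// distributed "groups" claim from one _claim_sources endpoint.
func resolveDistributedGroups(ctx context.Context, client *http.Client,
	verifier *oidc.IDTokenVerifier, endpoint, accessToken string) ([]string, error) {

	req, err := http.NewRequest("GET", endpoint, nil)
	if err != nil {
		return nil, err
	}
	if accessToken != "" {
		// The optional access_token from _claim_sources is sent as a bearer token.
		req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", accessToken))
	}
	req = req.WithContext(ctx)

	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Per the spec, the endpoint returns a JWT, verified here exactly like the
	// original ID token (signature, "iss", "aud", "exp").
	raw, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	idToken, err := verifier.Verify(ctx, string(raw))
	if err != nil {
		return nil, err
	}

	var distributed struct {
		Groups []string `json:"groups"`
	}
	if err := idToken.Claims(&distributed); err != nil {
		return nil, err
	}
	return distributed.Groups, nil
}

func main() {}
```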

What this PR does / why we need it: Makes it possible to retrieve many group memberships by offloading group resolution to a dedicated backend.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #62920

Special notes for your reviewer:
There are a few TODOs that seem better handled in separate commits.

Release note:

Lays groundwork for OIDC distributed claims handling in the apiserver authentication token checker.

A distributed claim allows the OIDC provider to delegate a claim to a
separate URL. Distributed claims are defined in OpenID Connect Core 1.0,
section 5.6.2.

For details, see: 
http://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims

@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Apr 26, 2018
@filmil
Contributor Author

filmil commented Apr 26, 2018

This is to handle: #62920

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Apr 26, 2018
@mikedanese mikedanese self-assigned this Apr 26, 2018
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Apr 26, 2018
//
// Example:
// "groups" -> []string{"http://example1.com/foo","http://example2.com/bar"}
IssuersPerClaim map[string][]string
Member

why shouldn't the issuer decide rather than the k8s oidc client? what does this solve?

Contributor

I'm also for not adding new APIs.

Contributor Author

@filmil filmil Apr 26, 2018

Two benefits, IMHO:

  • Makes an explicit guarantee to the cluster owner that only issuers they explicitly approve of will ever be called to resolve distributed claims. Note that it's not a decision by the oidc client; but a restriction on the acceptable values.
  • Makes it possible for us to initialize the ID token provider ahead of time.

Contributor Author

@filmil filmil Apr 26, 2018

@ericchiang could you please clarify your comment? What new APIs are being added? Perhaps more importantly, what would you like to see instead?

Contributor

What new APIs are being added?

New flags.

Makes an explicit guarantee to the cluster owner that only issuers they explicitly approve of will ever be called to resolve distributed claims. Note that it's not a decision by the oidc client; but a restriction on the acceptable values.

Users already put a lot of trust in their primary ID Token issuer. I feel like the benefit is small compared to adding more configuration knobs.

Let me think about this a little.

Contributor Author

Thinking about this a little more, the only distributed claims we should allow are the username
and groups claim (edit: or maybe even only allow it for the groups claim).

Could you explain why you think this is the case? (For example, that only nonstandard claims are allowed to be distributed?)

I would not expect a distributed claim to be able to provide the expiry, for example.

Hm, I have a slightly different view, though I'm not an expert in the matter. It seems that a distributed claim response must provide the expiry.

Otherwise, it becomes similar to a long-lived credential, which could be stolen and replayed. With a timestamp, whatever is examining the credential has a way to determine if the credential is fresh or not and therefore if it can believe the claims. This seems like quite an important requirement from a security perspective.

If we're worried about what remote addresses we can consult for this maybe we whitelist issuers?

That would work.

However, when would an issuer provide a distributed claim that an admin wouldn't be comfortable with? If the admin is picking a username or group claim, they're surely aware of whether that claim can be distributed and where it'd be distributed to.

I think from an auditability perspective it is much stronger to claim "we know exactly what calls our API server is making" than "our API server may be redirected to wherever our IDP says".

But if you think that's not a big concern, I'll be happy to oblige.

Member

I would not expect a distributed claim to be able to provide the expiry, for example.

Hm, I have a slightly different view, though I'm not an expert in the matter. It seems that a distributed claim response must provide the expiry.

I think you're talking about two different things. First, we verify the presented id_token, including its iss, exp, nbf claims. That is the exp claim Eric was saying he wouldn't expect to be able to be distributed.

Once verified, we look for the username and group claims. Those could be distributed, and if they are, according to the spec, the response from the distributed endpoint is another JWT that must be verified (including iss, and potentially its own exp and nbf claims).

Contributor Author

@filmil filmil Apr 27, 2018

First, we verify the presented id_token, including its iss, exp, nbf claims. That is the exp claim Eric was saying he wouldn't expect to be able to be distributed.

Ah I see. Indeed, not all claims may be named in _claim_names. In this implementation, a normal claim always wins if there is a claim name clash, which means exp will never be resolved through _claim_sources. But, let's make that statement even stronger, then.

To summarize my understanding of what would be acceptable to you:

  • Only allow groups claim to be distributed (that is, whatever the value has been set in the OIDC configuration to mean groups).
  • Require admins to specify --oidc-allowed-distributed-claims-issuer to list all acceptable issuers.
  • Introduce --oidc-disallow-distributed-claims as an escape hatch.

What do you think?

Contributor

Only allow groups claim to be distributed (that is, whatever the value has been set in the OIDC configuration to mean groups).

+1

Require admins to specify --oidc-allowed-distributed-claims-issuer to list all acceptable issuers.

-1

I still think that admins are actually more tolerant of requests to different sites than you expect. accounts.google.com returns a keys URL hosted under www.googleapis.com, for example.

(edit: this is also a restriction we could add later in a backwards compatible way)

Introduce --oidc-disallow-distributed-claims as an escape hatch.

I'd like to see someone request this behavior first.

Contributor Author

OK, I scaled back the implementation to support only groups claim resolution through distributed claims. Also removed all flags and options. Better?

@mikedanese
Member

@kubernetes/sig-auth-pr-reviews

@k8s-ci-robot k8s-ci-robot added the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Apr 26, 2018
Contributor

@awly awly left a comment

A bunch of stylistic comments but deferring to Eric/Mike on the overall design

@@ -230,6 +242,9 @@ func (s *BuiltInAuthenticationOptions) AddFlags(fs *pflag.FlagSet) {
"A key=value pair that describes a required claim in the ID Token. "+
"If set, the claim is verified to be present in the ID Token with a matching value. "+
"Repeat this flag to specify multiple claims.")

fs.Var(flag.NewColonSeparatedMultimapStringString(&s.OIDC.IssuersPerClaim), "oidc-distributed-claims-issuers-per-claim", ""+
Contributor

Remove ""+ at the end

Contributor Author

Done.

@@ -8,6 +8,7 @@ load(

go_test(
name = "go_default_test",
size = "small",
Contributor

Indent with spaces (I think /hack/update-bazel.sh should reformat this)

Contributor Author

Hm, on second thought, let's revert this change altogether.

}

// initVerifier creates a new ID token verifier for the given configuration and issuer URL. On success, calls setVerifier with the
// resulting verifier. Returns true in case of success, or non-nil error in case of an irrecoverable error.
Contributor

The last sentence is redundant.

Contributor Author

Done.

// Polls indefinitely in an attempt to initialize the distributed claims
// verifier, or until context canceled.
sync := make(chan struct{})
initFn := func() (done bool, err error) {
Contributor

err is always nil. Should this be func() bool ?

Contributor Author

It has to conform to the wait.PollUntil API.

return true, nil

}
go func() {
Contributor

Why not simply go wait.PollUntil(time.Second*10, initFn, ctx.Done()) ?

Contributor Author

@filmil filmil Apr 26, 2018

wait.PollUntil waits for 10 seconds before it executes initFn. I'd prefer not to have to wait if possible. As far as I could see, the wait package does not have a function that both takes a done channel and executes once before pausing.

Contributor

I see. You could use wait.PollImmediateInfinite and return ctx.Err() from initFn to catch a cancelled context.
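
A minimal sketch of that restructuring, assuming the standard k8s.io/apimachinery/pkg/util/wait package; tryInitVerifier and the surrounding wiring are made-up placeholders, not the PR's code:

```
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// tryInitVerifier is a hypothetical stand-in for the real initialization step.
func tryInitVerifier() bool { return true }

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// PollImmediateInfinite runs the condition immediately and then every 10
	// seconds, so there is no initial pause. Returning a non-nil error (here
	// ctx.Err()) is what stops the loop once the context is cancelled.
	initFn := func() (done bool, err error) {
		if err := ctx.Err(); err != nil {
			return false, err
		}
		return tryInitVerifier(), nil
	}
	go wait.PollImmediateInfinite(10*time.Second, initFn)
}
```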

return fmt.Errorf("oidc: no claim sources")
}

var sources claims
Contributor

can this be map[string]endpoint so you don't have to unmarshal endpoints in the loop below?

Contributor Author

Lo and behold, it can!

Done. :)
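
For illustration, decoding _claim_sources straight into a map of endpoint structs might look like the sketch below; the JSON field names follow the example claim in the PR description, while the Go names and error text are assumptions rather than the PR's actual code:

```
package main

import (
	"encoding/json"
	"fmt"
)

// endpoint mirrors one entry of "_claim_sources" from the example claim.
type endpoint struct {
	Endpoint    string `json:"endpoint"`
	AccessToken string `json:"access_token"`
}

func main() {
	raw := []byte(`{"src1": {"endpoint": "https://www.example.com", "access_token": "f005ba11"}}`)

	// Unmarshaling directly into map[string]endpoint avoids a second
	// unmarshal for every source inside the resolution loop.
	var sources map[string]endpoint
	if err := json.Unmarshal(raw, &sources); err != nil {
		panic(err)
	}

	ep, ok := sources["src1"]
	if !ok {
		panic("id token _claim_names referenced a source missing from _claim_sources")
	}
	fmt.Println(ep.Endpoint, ep.AccessToken)
}
```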

defer cancel()

// TODO: Allow passing request body with configurable information.
req, _ := http.NewRequest("GET", url, nil)
Contributor

handle the error

Contributor Author

Done.

if accessToken != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %v", accessToken))
}
req.WithContext(ctx)
Contributor

req = req.WithContext(ctx)

response, err := client.Do(req)
if err != nil {
return "", err
}
Contributor

defer response.Body.Close()

Contributor Author

done.

}()

// Let tests wait until the verifier is initialized.
if synchronizeVerifierForTest {
Contributor

Hmm, I think you can skip the channel and just call wait.PollUntil without a goroutine if the flag is true.

Contributor Author

Yes, there are multiple ways of achieving this effect. I opted for a way that minimally modifies the execution path in and out of test.

@ericchiang
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Apr 26, 2018
return "", err
}
responseBytes, err := ioutil.ReadAll(response.Body)
defer response.Body.Close()
Contributor

Move this before ReadAll. It won't make Reads fail

Contributor Author

Is there any specific reason to prefer that idiom?

Contributor

It follows the general pattern of

r, err := allocateResource()
if err != nil { return err }
defer cleanupResource(r)

If you keep it as is, every reader will think there's some non-obvious semantics and spend time trying to understand the reasoning behind it.

Contributor Author

Oh, I see. I was thinking more along the lines of "the first use of response.Body is on line N", so it made sense to defer the close on line N+1.

Whereas, in fact, the resource was allocated at ... := client.Do(...).

@filmil
Contributor Author

filmil commented Apr 27, 2018 via email

@filmil
Contributor Author

filmil commented Apr 27, 2018

/retest

@filmil filmil force-pushed the oidc-dist-claims branch 3 times, most recently from 85c0d8d to 8382197 on April 27, 2018 08:02
@filmil
Contributor Author

filmil commented Apr 27, 2018 via email

)

if glog.V(5) {
glog.Infof("initial claim set: %v", c)
Contributor

This is going to be really verbose... Just print the distributed claim you find in the resolve() method?

glog.Infof("resolved distributed claim %s from endpoint %s: %v", claimName, endpoint, claimValue)

Contributor

Also if you print all the claims you'll print endpoint access tokens.

Contributor Author

Yeah, I meant this to be logged only in tests, as it gives insight into what happened. Let's remove it since it's contentious.

claim string

// A HTTP transport to use for distributed claims
t *http.Transport
Contributor

nit: this should be an http.Client

Contributor Author

done

if av == nil {
// This lazy init should normally be very quick.
client := &http.Client{Transport: r.t, Timeout: 30 * time.Second}
ctx := oidc.ClientContext(context.Background(), client)
Contributor

nit: would be nice if this context was cancel-able.

Contributor Author

hm, there isn't any place to cancel it from. Any suggestions?

Contributor

adding a TODO is fine for now

Contributor Author

Done.

// resolve requests distributed claims from all endpoints passed in,
// and inserts the lookup results into allClaims.
func (r *claimResolver) resolve(endpoint endpoint, allClaims claims) error {
// TODO: cache resolved claims.
Contributor

I'd think this has to be resolved before we turn on this functionality. These tokens are evaluated on every HTTP request. It's going to be incredibly slow if we leave this as a TODO.

Contributor Author

Agreed, but it seemed like it would be better placed in a separate commit. What do you advise? I'm fine with adding more code to this commit, but it's already large as it is.

Contributor

Add a private field to the config to let the tests enable this feature, and disable it by default?

Contributor Author

Good idea! Done.
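
For context on the deferred caching TODO, one possible shape for a resolved-claims cache is sketched below. It is purely illustrative (keyed by endpoint URL plus access token, expiring with the returned JWT's exp); the eventual follow-up may look quite different, and all names here are assumptions:

```
package main

import (
	"sync"
	"time"
)

// cachedClaims holds one resolved distributed-claims payload and its expiry.
type cachedClaims struct {
	payload []byte
	expiry  time.Time
}

// claimCache is an illustrative in-memory cache keyed by endpoint URL plus
// access token; entries become invalid once the JWT's "exp" has passed.
type claimCache struct {
	mu      sync.Mutex
	entries map[string]cachedClaims
}

func (c *claimCache) get(key string, now time.Time) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[key]
	if !ok || now.After(e.expiry) {
		return nil, false
	}
	return e.payload, true
}

func (c *claimCache) set(key string, payload []byte, expiry time.Time) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.entries == nil {
		c.entries = map[string]cachedClaims{}
	}
	c.entries[key] = cachedClaims{payload: payload, expiry: expiry}
}

func main() {}
```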

w.Header().Set("Content-Type", "application/json")
glog.V(5).Infof("%v: returning: %+v", r.URL, *openIDConfig)
w.Write([]byte(*openIDConfig))
default:
Contributor

nit: default should be a 404 with an error

Contributor Author

done

}
v, err := r.Verifier(untrustedIss)
if err != nil {
glog.Errorf("verifier: %v", err)
Contributor

I think it'd be better to return an error if you can't reach the distributed claim provider. Users are going to get weird authorization errors otherwise. But I'd be fine if we want to keep it this way.

Contributor Author

Done.

}
value, ok := distClaims[r.claim]
if !ok {
return fmt.Errorf("could not find distributed claim: %v", r.claim)
Contributor

We're going to live and die by good error messages here :P

fmt.Errorf("jwt returned by distributed claim endpoint %s did not contain claim: %v", endpoint, r.claim)

Contributor Author

Agreed. Changed.

ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// TODO: Allow passing request body with configurable information.
Contributor

Is this actually part of the spec?

Contributor Author

Azure OIDC IDP supports this. IIUC the spec says nothing either way.

}
ep, ok := sources[src]
if !ok {
return fmt.Errorf("oidc: malformed claim sources")
Contributor

fmt.Errorf("id token _claim_names contained a source %s missing in _claim_sources", src)

Contributor Author

done

@@ -381,3 +684,11 @@ func (c claims) hasClaim(name string) bool {
}
return true
}

func (c claims) String() string {
Contributor

I don't think we should ever print all the claims. They might contain sensitive information like access tokens.

Contributor Author

done

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 1, 2018
@@ -104,6 +112,73 @@ type Options struct {

// now is used for testing. It defaults to time.Now.
now func() time.Time

// TODO: remove this field once caching is implemented.
enableDistributedClaims bool
Contributor

woops, you need to remove this

@ericchiang
Contributor

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 1, 2018
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 1, 2018
@filmil
Contributor Author

filmil commented May 1, 2018

Ow! Good catch. I do remember removing it, perhaps at some point I hit "undo" one too many times.

Please take another look, @ericchiang. Sorry for the ruckus.

@ericchiang
Contributor

/lgtm
/hold remove

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 1, 2018
@ericchiang
Contributor

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 1, 2018
@filmil
Contributor Author

filmil commented May 1, 2018 via email

@filmil
Contributor Author

filmil commented May 1, 2018 via email

A distributed claim allows the OIDC provider to delegate a claim to a
separate URL.  Distributed claims are of the form seen below, and are
defined in OpenID Connect Core 1.0, section 5.6.2.

See: https://openid.net/specs/openid-connect-core-1_0.html#AggregatedDistributedClaims

Example claim:

```
{
  ... (other normal claims)...
  "_claim_names": {
    "groups": "src1"
  },
  "_claim_sources": {
    "src1": {
      "endpoint": "https://www.example.com",
      "access_token": "f005ba11"
    },
  },
}
```

Example response to a followup request to https://www.example.com is a
JWT-encoded claim token:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": ["team1", "team2"],
  "exp": 9876543210
}
```

Apart from the indirection, the distributed claim behaves exactly
the same as a standard claim.  For Kubernetes, this means that the
token must be verified using the same approach as for the original OIDC
token.  This requires the presence of "iss", "aud" and "exp" claims in
addition to "groups".

All existing OIDC options (e.g. groups prefix) apply.

Any claim can be made distributed, even though the "groups" claim is
the primary use case.

Allows groups to be a single string due to
kubernetes#33290, even though
OIDC defines the "groups" claim to be an array of strings. So, this will
be parsed correctly:

```
{
  "iss": "https://www.example.com",
  "aud": "my-client",
  "groups": "team1",
  "exp": 9876543210
}
```

Expects that distributed claims endpoints return a JWT, per the OIDC spec.

In case both a standard and a distributed claim with the same name
exist, the standard claim wins.  The spec seems undecided about the correct
approach here.

Distributed claims are resolved serially.  This could be parallelized
for performance if needed.

Aggregated claims are silently skipped.  Support could be added if
needed.
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 2, 2018
@filmil
Contributor Author

filmil commented May 2, 2018

/retest

Let's try one more time in case this is some kind of flake. If it repeats, I'll try to dig in.

@ericchiang any advice? For the record, the only thing I did above was to rebase on top of master to verify that there are no merge conflicts. That removed the lgtm, however. Would you be able to help?

@ericchiang
Contributor

No great advice for the test flakes, but rebasing removing the LGTM is expected. I can re-apply the label.

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 2, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ericchiang, filmil

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@filmil
Contributor Author

filmil commented May 2, 2018 via email

@filmil
Contributor Author

filmil commented May 2, 2018 via email

@ericchiang
Contributor

@filmil #63378

@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel comment for consistent failures.

@k8s-github-robot

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

Labels
  • approved - Indicates a PR has been approved by an approver from all required OWNERS files.
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • lgtm - "Looks good to me", indicates that a PR is ready to be merged.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • sig/auth - Categorizes an issue or PR as relevant to SIG Auth.
  • size/XL - Denotes a PR that changes 500-999 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Support OIDC distributed claims for group resolution in the K8S apiserver OIDC token checker
8 participants