Project request quota admission control#6768

Merged
openshift-bot merged 1 commit into openshift:master from csrwng:projectreq_limit
Jan 28, 2016
Conversation

@csrwng
Contributor

csrwng commented Jan 21, 2016

Adds an admission control plugin that limits project requests per user, according to the plugin configuration.

@csrwng
Contributor Author

csrwng commented Jan 21, 2016

@deads2k @liggitt ptal

Contributor
Is this going to be shared?

Contributor Author
It is... the same cache used in the other project admission control plugins.

@csrwng
Contributor Author

csrwng commented Jan 21, 2016

updated package name

@csrwng
Contributor Author

csrwng commented Jan 21, 2016

[test]

@deads2k
Contributor

deads2k commented Jan 22, 2016

unit test failures

@liggitt
Contributor

liggitt commented Jan 22, 2016

@deads2k I thought we were wondering if we even needed this. Did you resolve that we did?

Contributor
Don't you need to find all the limits that apply and take the min (or the max, not sure which direction we'd go)?

Contributor Author
Not sure that we have a use case for that or that selectors are going to be that complex. Right now it's relying on the order of selectors in the config and, for what we want this for, imho it's sufficient.

Contributor
If I apply a limit based on label selection, I'd be surprised if it didn't take effect. I think we should pick a direction (either max or min of matching limits), iterate to determine that limit, and document which direction we chose. I think I would tend toward taking the max (just like policy... you can do something if any of your roles allows it).

Contributor
If I apply a limit based on label selection, I'd be surprised if it didn't take effect. I think we should pick a direction (either max or min of matching limits), iterate to determine that limit, and document which direction we chose. I think I would tend toward taking the max (just like policy... you can do anything allowed by any policy grant).

I prefer first one wins. Otherwise we'll pick a direction and not like it. I think it's more likely that I know which labels have priority over which other labels and build my config file that way.

Contributor Author
Not sure that picking the direction would alleviate the problem of having a limit not apply. Suppose we say max wins, and we have a very general rule that gives you a max of 100, and we add a more specific selector with a max of 5. Your more specific rule would never apply. Suppose we say min wins... then we can't easily configure something like the online use case where we want the default to be 1, but only for certain users increase that limit to 100. The more selective limit would never apply.

Contributor
I prefer first one wins. Otherwise we'll pick a direction and not like it. I think it's more likely that I know which labels have priority over which other labels and build my config file that way.

I could buy that. It would also let you restrict specific users while letting everyone else have a higher limit.
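The "first one wins" semantics the thread settles on can be sketched in Go. This is a minimal illustration; the type and function names below are invented for the example, not the plugin's actual API:

```go
package main

import "fmt"

// LimitRule pairs a user-label selector with a project limit.
// A nil MaxProjects means unlimited; an empty Selector matches every user.
type LimitRule struct {
	Selector    map[string]string
	MaxProjects *int
}

// selectorMatches reports whether every key/value in selector
// is present in the user's labels.
func selectorMatches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// firstMatchingLimit walks the rules in config order and returns the
// limit from the first rule whose selector matches -- "first one wins".
func firstMatchingLimit(userLabels map[string]string, rules []LimitRule) (*int, bool) {
	for _, rule := range rules {
		if selectorMatches(rule.Selector, userLabels) {
			return rule.MaxProjects, true
		}
	}
	return nil, false
}

func main() {
	one, hundred := 1, 100
	rules := []LimitRule{
		{Selector: map[string]string{"level": "platinum"}, MaxProjects: &hundred},
		{Selector: map[string]string{}, MaxProjects: &one}, // default rule: matches everyone
	}
	if limit, ok := firstMatchingLimit(map[string]string{"level": "platinum"}, rules); ok {
		fmt.Println(*limit) // platinum users hit the first, more specific rule
	}
	if limit, ok := firstMatchingLimit(map[string]string{}, rules); ok {
		fmt.Println(*limit) // everyone else falls through to the default
	}
}
```

Because evaluation stops at the first match, the admin encodes priority purely by rule order, which sidesteps the max-versus-min ambiguity discussed above.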

@liggitt
Contributor

liggitt commented Jan 22, 2016

Also, this doesn't solve the issue with a lagging store allowing a few requests through until it catches up

@csrwng
Contributor Author

csrwng commented Jan 22, 2016

Also, this doesn't solve the issue with a lagging store allowing a few requests through until it catches up

Can you think of a way to mitigate this?

Contributor
isn't the yaml for this level: platinum?

Contributor Author
Fixed. Oddly, it worked with either form.

@liggitt
Contributor

liggitt commented Jan 22, 2016

Can you think of a way to mitigate this?

Not without recording the update somewhere that would generate a conflict (like an annotation on the user), and having a reconciling controller to sweep and fix the annotation if it got out of sync (like resource quota)... not sure I like something that complex for this

@csrwng
Contributor Author

csrwng commented Jan 22, 2016

Added requester index to project cache
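The requester index mentioned here can be sketched as follows. It assumes projects record their creator in an annotation (the annotation key and all type names below are illustrative; the real cache builds on Kubernetes' indexed store):

```go
package main

import "fmt"

// Namespace stands in for a cached project/namespace object.
type Namespace struct {
	Name        string
	Annotations map[string]string
}

// Hypothetical annotation key recording who requested the project.
const requesterAnnotation = "openshift.io/requester"

// indexByRequester mirrors what a cache index function does: it maps
// each requester to the names of the namespaces they requested, so
// admission can count a user's projects with one index lookup instead
// of scanning every namespace.
func indexByRequester(namespaces []Namespace) map[string][]string {
	index := make(map[string][]string)
	for _, ns := range namespaces {
		if requester, ok := ns.Annotations[requesterAnnotation]; ok {
			index[requester] = append(index[requester], ns.Name)
		}
	}
	return index
}

func main() {
	index := indexByRequester([]Namespace{
		{Name: "dev", Annotations: map[string]string{requesterAnnotation: "alice"}},
		{Name: "test", Annotations: map[string]string{requesterAnnotation: "alice"}},
		{Name: "ops", Annotations: map[string]string{requesterAnnotation: "bob"}},
	})
	// The admission check compares this count against the user's limit.
	fmt.Println(len(index["alice"]))
}
```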

@deads2k
Contributor

deads2k commented Jan 22, 2016

@deads2k I thought we were wondering if we even needed this. Did you resolve that we did?

We could work around it, but since this has general utility (it's commonly requested), is useful for us, and is relatively small, it seemed better to bring it in.

@csrwng
Contributor Author

csrwng commented Jan 22, 2016

[test]

@smarterclayton
Contributor

You could require an atomic mutation on the user in order to allow it. You
could store the request into etcd, and then have a controller delete the
request when it satisfies it (doesn't limit the number of requests you can
create though). All the project requesters could mutate a single object
which limits to one inflight per user at a time. The last is probably
worth considering - it's effectively a lock on project creations, and then
we just have to decide whether we want to rate limit project creations on
the cluster in general.


@smarterclayton
Contributor

I'm tempted though to say that since we're paying the price of the project
cache already the squeaking through doesn't bother me - it just means that
you have to wait post creation and ensure that the resource version
returned by the write of the project matches the project cache, and check
your quota again. You can still burst across the cluster, but done right
you can parallelize to max number of masters.


@csrwng
Contributor Author

csrwng commented Jan 22, 2016

it just means that
you have to wait post creation and ensure that the resource version
returned by the write of the project matches the project cache, and check
your quota again.

@smarterclayton how can I do this?

@smarterclayton
Contributor

Since we don't have post admission control yet (as @derekwaynecarr has floated) the current solution is fine, assuming a) the leak window is minimal even if an attacker is blowing up the server and b) you have a corresponding controller that goes and fixes up projects that are out of band (i.e. the oldest projects are set to zero quota). The controller can be post 3.2, but it's essentially quota reconciliation and needs to tie in with the proposed organizational quota constraint.

@csrwng
Contributor Author

csrwng commented Jan 22, 2016

Sounds good to me

@derekwaynecarr
Member

Do we ever plan on letting users organize their projects via labels? If so, would we basically have to no longer treat a Project as a virtual resource? Or would we store the project labels as annotations on the Namespace and then update the strategy code to pull a Project label from the annotation instead of the normal location? I guess the latter would make sense but it would be nice to have a plan...

Member
Glad we went with this approach. :-)

Contributor Author
I like it too :) Allowed us to remove the embarrassing hack that merged the openshift and kube clients. Thank you for suggesting it.

@csrwng
Contributor Author

csrwng commented Jan 25, 2016

@deads2k @liggitt @smarterclayton Is the way that this plugin is configured (via user labels) ok? Do you have any suggestions for changes? Writing doc for it :-)

Contributor
Add a TODO. This should go away once we have the yaml serializer in the rebase after this one.

@deads2k
Contributor

deads2k commented Jan 25, 2016

@deads2k @liggitt @smarterclayton Is the way that this plugin is configured (via user labels) ok?

I like using labels on users for it.
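Reconstructed from the discussion, a configuration of roughly this shape would give labeled users higher limits while defaulting everyone else to one project, with the first matching selector winning (field names are illustrative and may differ from the merged plugin):

```yaml
apiVersion: v1
kind: ProjectRequestLimitConfig
limits:
# First match wins, so more specific selectors go first.
- selector:
    level: platinum   # no maxProjects: unlimited
- selector:
    level: gold
  maxProjects: 10
- maxProjects: 1      # no selector: default rule for all other users
```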

@csrwng
Contributor Author

csrwng commented Jan 25, 2016

Addressed comments so far

@csrwng
Contributor Author

csrwng commented Jan 26, 2016

@ironcladlou ptal

Contributor
json tags instead of yaml.

@csrwng
Contributor Author

csrwng commented Jan 26, 2016

struct tags fixed

@csrwng
Contributor Author

csrwng commented Jan 26, 2016

Test error seems like a new error from the vagrant plugin:

Running TEST_ASSETS=true TEST_ASSETS_HEADLESS=true make check -j --output-sync=recurse...
make: unrecognized option '--output-sync=recurse'

@deads2k
Contributor

deads2k commented Jan 26, 2016

Test error seems like a new error from the vagrant plugin:

@smarterclayton did you turn on parallel? the RHEL images don't have a recent level of make

@deads2k
Contributor

deads2k commented Jan 27, 2016

lgtm

@csrwng
Contributor Author

csrwng commented Jan 27, 2016

looks like an etcd flake:

2016-01-27 11:19:48.486493 E | etcdserver: publish error: etcdserver: request timed out

[merge]

@csrwng
Contributor Author

csrwng commented Jan 27, 2016

Needs rebase on top of Kube rebase

@openshift-bot
Contributor

Evaluated for origin test up to 4391347

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/513/)

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/4768/) (Image: devenv-rhel7_3269)

@csrwng
Contributor Author

csrwng commented Jan 28, 2016

[merge]

@openshift-bot
Contributor

Evaluated for origin merge up to 4391347

openshift-bot pushed a commit that referenced this pull request Jan 28, 2016
@openshift-bot openshift-bot merged commit 86a15eb into openshift:master Jan 28, 2016
@csrwng csrwng deleted the projectreq_limit branch February 1, 2016 18:09