Run rbac authorizer from cache #34047
Conversation
@cjcullen fyi. You have dealt with some authorizer caching issues and might have suggestions.
This mirrors the implementation in OpenShift. The cache isn't like the webhook, where you're stuck for minutes; this usually updates sub-second off the watch. I will open a separate pull to add
Force-pushed from 87274f4 to e7670a7.
@ncdc give it a scan?
Force-pushed from e7670a7 to e1638f1.
}
client, err := s.NewSelfClient(privilegedLoopbackToken)
if err != nil {
glog.Errorf("Failed to create clientset: %v", err)
I know this is just a code move, but shouldn't this be fatal instead of error?
I'm not ready to do it yet because the wiring in test-integration doesn't allow this code to succeed (didn't before, doesn't now). Once we fix test-integration to use a "normal" startup flow, yes.
Ok, thanks. Do we have an issue about this so we don't forget to fix it?
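For context, a minimal stand-alone sketch of the error-vs-fatal distinction being discussed, using the standard library's log package in place of glog (newSelfClient is a hypothetical stand-in for s.NewSelfClient, and always succeeds here so the fatal path is not taken when this runs):

```go
package main

import (
	"fmt"
	"log"
)

// newSelfClient is a hypothetical stand-in for s.NewSelfClient;
// it always succeeds in this sketch.
func newSelfClient(token string) (string, error) {
	return "client-for-" + token, nil
}

func main() {
	client, err := newSelfClient("loopback-token")
	if err != nil {
		// glog.Errorf logs and continues, leaving the process running
		// without a usable loopback client; glog.Fatalf (like
		// log.Fatalf here) logs and exits, which is what the reviewer
		// suggests once test-integration uses a normal startup flow.
		log.Fatalf("Failed to create clientset: %v", err)
	}
	fmt.Println(client)
}
```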
}
client, err := s.NewSelfClient(privilegedLoopbackToken)
if err != nil {
glog.Errorf("Failed to create clientset: %v", err)
fatal?
}
if modeEnabled(genericoptions.ModeRBAC) {
This check is no longer needed?
Right, the informers are only activated if they are needed and so the net result is the same.
@deads2k reviewed. A couple of questions.
Sure: #34728. There's some call swizzling that I need @smarterclayton's help with to be able to start doing it.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
This PR broke the scalability test by significantly increasing controller-manager resource usage, e.g.: I'm reverting this change to unblock the merge queue.
So we still don't have kubemark as submit-queue blocking? cc @eparis
@ncdc - we do have small kubemark blocking. I don't know why it didn't block the merge: there is a kubemark suite in the required suites here.
@ncdc - OK, I think I understand this one. We have a misconfiguration of the per-PR job and we are not checking the resource usage in it. That's why we didn't catch this in presubmit, but only in post-submit. I will send out a PR to fix it in a few minutes.
RBAC authorization can be run very effectively out of a cache. The cache is a normal reflector-backed cache (shared informer).
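As a rough, self-contained illustration of the approach (not the actual Kubernetes code), here is a sketch of an authorizer that answers purely from an in-memory cache which a watch keeps up to date; roleCache, handleEvent, and authorize are all hypothetical names, and the rules map is a drastic simplification of RBAC:

```go
package main

import (
	"fmt"
	"sync"
)

// roleCache sketches the idea in this PR: instead of hitting storage
// on every authorization check, the authorizer reads rules from an
// in-memory cache that a watch (in Kubernetes, a shared informer)
// keeps up to date.
type roleCache struct {
	mu    sync.RWMutex
	rules map[string][]string // subject -> allowed verbs (simplified)
}

// handleEvent mimics an informer event handler updating the cache as
// bindings change upstream; this is why the cache is usually
// sub-second fresh off the watch rather than stale for minutes.
func (c *roleCache) handleEvent(subject string, verbs []string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.rules[subject] = verbs
}

// authorize answers entirely from the cache: no round trip to the API
// or etcd on the request path.
func (c *roleCache) authorize(subject, verb string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	for _, v := range c.rules[subject] {
		if v == verb {
			return true
		}
	}
	return false
}

func main() {
	cache := &roleCache{rules: map[string][]string{}}
	// Simulate a watch event delivering a new binding.
	cache.handleEvent("alice", []string{"get", "list"})
	fmt.Println(cache.authorize("alice", "get"))    // true
	fmt.Println(cache.authorize("alice", "delete")) // false
}
```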
I've split this into three parts:
@liggitt @ericchiang @kubernetes/sig-auth