Make webhooks configurable #1981
Comments
Pasting an idea here from Kubernetes Slack: https://kubernetes.slack.com/archives/CLGR9BJU9/p1623155044256500 Instead of letting users figure out the correct filter, why not continuously autogenerate it in the Kyverno controller based on the currently existing policies? This ensures the filter stays optimal at all times. More specifically:
TL;DR: implementing just the first bullet would be a huge improvement over the current state.
I also support @yanniszark's idea in the above comment. The webhook should be composed dynamically based on the resources the policies under it match. If a user has only created policies that apply to Service resources, the webhook should be configured to send only Service AdmissionReviews to Kyverno, and so on.
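As an illustration of what such a dynamically generated configuration could look like, here is a sketch of a webhook narrowed to Service resources only. The object and service names below are assumed for the example, not taken from Kyverno's actual generated resources:

```yaml
# Hypothetical auto-generated webhook, with rules narrowed to the
# resource kinds matched by currently existing policies (here: Services).
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: kyverno-resource-validating-webhook-cfg   # name assumed for illustration
webhooks:
  - name: validate.kyverno.svc
    clientConfig:
      service:
        name: kyverno-svc        # assumed service name
        namespace: kyverno
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["services"]  # only what existing policies actually match
    failurePolicy: Ignore
    sideEffects: None
    admissionReviewVersions: ["v1"]
```

With rules scoped this tightly, the API server never sends Kyverno AdmissionReviews for resource kinds no policy cares about, which both reduces load and shrinks the blast radius of a failing webhook.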
Seconding this bit:
We have run into a dozen or more situations where Kyverno was not in a good state and normal cluster operations could not succeed because of the failing webhooks.
Linking to the design doc: https://docs.google.com/document/d/1Y7_7ow4DgCLyCFQcFVz1vHclghazAKZyolIfprtNURc/edit?usp=sharing
Is your feature request related to a problem? Please describe.
Currently, Kyverno auto-creates and updates the validating and mutating webhook configurations, and any user changes are overwritten.
Describe the solution you'd like
Users should be allowed to tune the webhook configurations for their deployments.
Resource filters (currently set via args and a ConfigMap) should be applied to the webhook settings, to optimize which requests are sent to Kyverno.
Users should be able to migrate from failurePolicy=Ignore to failurePolicy=Fail.
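That migration amounts to flipping one field on the generated webhook configuration. A minimal sketch of the relevant fragment (webhook name assumed):

```yaml
webhooks:
  - name: validate.kyverno.svc   # name assumed for illustration
    # Ignore = fail open: requests are admitted if Kyverno is unreachable.
    # Fail   = fail closed: matching requests are rejected if Kyverno is down.
    failurePolicy: Fail
```

Fail-closed is only safe to adopt once the webhook rules are narrow enough that a Kyverno outage cannot block unrelated cluster operations, which is why this request pairs naturally with the dynamic rule generation above.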
Additional context
See slack discussions:
https://kubernetes.slack.com/archives/CLGR9BJU9/p1623155044256500
https://kubernetes.slack.com/archives/CLGR9BJU9/p1622078250126000
Also see: #893