Implement cascading ModRules through execution tiers #91
Hi @thedatabaseme,

I understand. This turned out to be less than practical for two reasons: performance and re-entrance.

KubeMod is quite a promiscuous mutating webhook - by default it subscribes to be notified about Create/Update events for a pretty wide range of K8S resources. This puts KubeMod's resource processing logic on the hot path of a lot of Kubernetes operations. I have medium-sized Helm charts whose installations lead to hundreds of objects being passed through KubeMod per second.

The second issue is more subtle. Contradicting ModRules can cancel each other out and lead to infinite loops (flapping changes). And the kicker is that the more ModRules you have in your system, the harder it is to track which combination of ModRules leads to infinitely flapping changes. In addition, any attempt to resolve this with a circuit breaker (say, no more than 8 passes) is just a band-aid based on an arbitrary number.

I think there may be a better solution to your issue: introducing some sense of order into the rule execution may be just what you need. This way we avoid both slow multi-pass execution and endless flapping caused by contradicting rules.

Thoughts?
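For illustration, a minimal sketch of such a contradicting pair (the rule names and the `color` label are invented): under hypothetical multi-pass execution, each pass would flip the label back, so the rule set never converges.

```yaml
# Pass 1 turns color=blue into color=red; pass 2 matches red and
# turns it back to blue; and so on forever.
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: prefer-red
spec:
  type: Patch
  match:
    - select: '$.metadata.labels.color'
      matchValue: 'blue'
  patch:
    - op: replace
      path: /metadata/labels/color
      value: '"red"'
---
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: prefer-blue
spec:
  type: Patch
  match:
    - select: '$.metadata.labels.color'
      matchValue: 'red'
  patch:
    - op: replace
      path: /metadata/labels/color
      value: '"blue"'
```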
Having the possibility to assign an order or priority to the ModRules would be a good solution.
Ok, I'll plan on implementing and shipping this with the next version. |
One suggestion from my side: ArgoCD has something similar, called sync waves, which is done via an annotation. Everything defaults to sync wave "0", and you can configure resources to be executed either before everything else (-1, -2, -3, ...) or after everything else (1, 2, 3, ...). Personally I found this to be quite a nice mechanism, as it is extremely flexible and easy to use from a user's perspective. Example:
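A sketch of the sync-wave annotation on any ArgoCD-managed resource:

```yaml
metadata:
  annotations:
    # ArgoCD syncs resources in ascending wave order; resources
    # without this annotation default to wave "0".
    argocd.argoproj.io/sync-wave: "-1"
```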
This could be translated to:
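For KubeMod this might look like the fragment below; the `kubemod.io/execution-wave` annotation name is hypothetical:

```yaml
metadata:
  annotations:
    # Hypothetical annotation mirroring ArgoCD's sync waves: "0"
    # would be the default, and negative waves would run first.
    kubemod.io/execution-wave: "-1"
```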
While I quite like this approach, I'm not a huge fan of pinning a behavioral property to metadata, so my final suggestion would be to do something like this:
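A sketch with ordering as a typed field on the ModRule spec instead (shown here with the `executionTier` name that was eventually adopted; the rest of the rule is filler):

```yaml
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: rewrite-images
spec:
  # Ordering as a first-class, typed field instead of an annotation;
  # lower tiers would run before higher ones, with 0 as the default.
  executionTier: -1
  type: Patch
  match:
    - select: '$.kind'
      matchValue: 'Pod'
  patch:
    - op: add
      path: /metadata/labels/mutated-by
      value: '"kubemod"'
```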
Hi @orbatschow,

Right, this is exactly how it will work - a new integer property on the ModRule CRD (not an annotation), with a default value of 0.
Hi @thedatabaseme, @orbatschow,

Please note that this has been released with kubemod 0.16.0.
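A hedged usage sketch against 0.16.0, assuming the released property is the integer `spec.executionTier` (default `0`): pinning the image-rewrite rule to an earlier tier lets later rules match the rewritten values. The patch below is a simplified placeholder (hard-coded image, container index 0).

```yaml
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: rewrite-dockerhub-images
spec:
  # Negative tiers execute before the default tier 0, so rules in
  # tier 0 (e.g. one adding an imagePullSecret) see the rewritten
  # image values instead of the original docker.io references.
  executionTier: -1
  type: Patch
  match:
    - select: '$.kind'
      matchValue: 'Pod'
    - select: '$.spec.containers[*].image'
      matchRegex: '^docker\.io/.*$'
  patch:
    # Placeholder patch; a real rule would cover all containers
    # and image spelling variants.
    - op: replace
      path: /spec/containers/0/image
      value: '"myregistry.com/library/nginx:latest"'
```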
Hello,
we have the following situation. We want to replace all possible Docker Hub image specifications (like `docker.io/image:tag`, `image:tag`, `docker.io/library/image:tag`, ...) in our Pod manifests with our private image registry. We have written some ModRules for that, along the lines of the sketch below.

Because we need regcreds for the private registry, we then created a second mutation rule that adds the regcred secret as `imagePullSecret`.
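Sketched roughly as follows; the rule names, the hard-coded replacement image, and container index 0 are simplified placeholders for the real logic:

```yaml
# Rule 1: rewrite Docker Hub image references to the private registry.
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: rewrite-dockerhub-images
spec:
  type: Patch
  match:
    - select: '$.kind'
      matchValue: 'Pod'
    - select: '$.spec.containers[*].image'
      matchRegex: '^docker\.io/.*$'
  patch:
    - op: replace
      path: /spec/containers/0/image
      value: '"myregistry.com/library/nginx:latest"'
---
# Rule 2: add the regcred pull secret to Pods using the private registry.
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: add-regcred
spec:
  type: Patch
  match:
    - select: '$.kind'
      matchValue: 'Pod'
    - select: '$.spec.containers[*].image'
      # As described below, this never matches in practice, because
      # rules are evaluated against the initial Pod specification.
      matchRegex: '^myregistry\.com/.*$'
  patch:
    - op: add
      path: /spec/imagePullSecrets
      value: |-
        - name: regcred
```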
For testing we use this deployment, for example:
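A stand-in for the test manifest (the real one may have differed), using a Docker Hub image reference so that both rules should apply:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          # Deliberately a Docker Hub reference, so rule 1 should
          # rewrite it and rule 2 should then add the pull secret.
          image: docker.io/library/nginx:latest
```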
What we now experience is the following:

1. The rules replacing the `image` reference in the Pod work as expected. This applies for all combinations and for all containers and initContainers.
2. The rule adding `regcred` as `imagePullSecret` does not match and is not triggered.

We found out that all rules always seem to check the initial `Pod` specification, because if we change `matchRegex: '^myregistry\.com/.*$'` to `matchRegex: '^docker.io/.*$'`, the rule is triggered and does what it should. This makes me a bit confused, because I assumed that changing the `image` specification of a Pod would lead to a status update, which in turn would cause the regcred rule to be triggered. We worked around this issue by removing the `select` part in the regcred rule entirely. This now adds regcreds to all Pods, regardless of whether they need them or not.

So I have the following questions or proposals:
I hope my explanation is comprehensible. If you have any questions, don't hesitate to ask.
Kind regards
Philip