Allow users to upgrade Policy Server using Helm chart #52
Looked into this, and I think it is because the changes to the policyServer are part of the post-install hook. It seems that post-install hooks are not tracked by Helm once they run, and can't be updated: https://helm.sh/docs/topics/charts_hooks/#hook-resources-are-not-managed-with-corresponding-releases. Tried a …
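For reference, this is roughly what makes a resource a hook. A minimal sketch, assuming a Deployment shaped like the default policy-server (the name and labels here are illustrative, not the actual chart's): any resource carrying a `helm.sh/hook` annotation is created by the hook event and is not tracked as part of the release afterwards, so `helm upgrade` will never reconcile changes to it.

```yaml
# Hypothetical hook resource: because of the helm.sh/hook annotation,
# Helm creates this Deployment during post-install but does NOT manage
# it as part of the release, so later `helm upgrade` runs ignore it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: policy-server              # illustrative name
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: policy-server
  template:
    metadata:
      labels:
        app: policy-server
    spec:
      containers:
        - name: policy-server
          image: ghcr.io/kubewarden/policy-server:v0.2.4
```

Dropping the annotations would be enough to bring the resource back under release management.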
It seems that, in order to upgrade the default policy-server, it would need to be in its own chart. It's possible to use subcharts:
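As a sketch of the subchart idea (the chart names, version numbers, and repository URL below are assumptions, not the real layout), the parent chart's `Chart.yaml` could declare the policy-server chart as a dependency:

```yaml
# Hypothetical parent Chart.yaml: the policy-server lives in its own chart
# and is pulled in as a dependency, so its resources are part of the
# release and can be reconfigured with `helm upgrade`.
apiVersion: v2
name: kubewarden-controller       # assumed parent chart name
version: 0.1.0
dependencies:
  - name: kubewarden-policy-server   # hypothetical subchart name
    version: 0.1.0
    repository: "https://example.com/charts"   # assumed repository URL
    condition: policyServer.enabled            # toggle via values.yaml
```

The `condition` field would let users opt out of the default policy-server entirely from `values.yaml`.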
We will need to wait for `helm upgrade` … Also, we will need to ship a minimum set of policies so the Kubewarden stack is secure (e.g.: no privileged pods on the same nodes); see kubewarden/kubewarden-controller#129. Maybe it is worth writing an RFC on how the Kubewarden subcharts are laid out.
I've been working on this to allow policy server upgrades and to install the resources in the proper order, to avoid weird issues like the one reported in kubewarden/kubewarden-controller#110. However, I found issues during my tests that I would like to discuss with you. I've created a job to check whether the controller webhook is reachable, but it only works if we do not install the Policy Server at the same time. This happens because the hook is the last thing to run, so by then it is too late to check whether the service is reachable: Helm has already tried to deploy the Policy Server and, potentially, failed. Thus, I cannot coordinate the installation of the controller and the Policy Server purely with Helm charts. Using Helm sub-charts, as mentioned in #52 (comment), does not help either: Helm merges all the templates from all the sub-charts and the parent chart into a single set, which reproduces the problem described above. With that in mind, I can see two options now:
I would say to go with option 2. The time it takes for the controller to become reachable is a known issue, and we can work around it in different ways. We can still install the policy server by enabling a configuration option. Any thoughts?
Another idea I just had is to change the controller: remove the webhook for the policy server and move the logic that adds the finalizers into a reconcile loop. I think this is the "right" fix.
I disagree on this one. I think webhooks are important so invalid or incoherent resources can be identified before they are even persisted. This can only be done with webhooks, and I think it keeps the logic simpler: if we allow invalid or incoherent resources to be persisted, our Reconcile() logic has to be defensive against these cases, making the controller logic more fragile and harder to maintain. In my opinion, and given the complexity these timing problems have proven to cause, I would consider not installing a default policy server with the Helm chart; installing a default policy server would become another step in the documentation. Another variant of my solution would be for the controller to reconcile a default policy server at startup if it is missing, with a flag on the controller to disable this behavior.
Thanks José for looking into this and coming up with several paths! I would skip (2), as it complicates updating and tracking the state of the default policy-server. It solves the issue we are talking about (reconfiguring the policy-server), but makes automatic upgrades/reconfiguration more complicated. My vote goes for (1). The more I think about this problem (and the problem of installing several charts that depend on each other), the more it reminds me of why we built https://github.com/rancher-sandbox/hypper. I don't want to tell Kubewarden users to install with hypper instead of helm, though. Just that we thought about the problem, and I think it's a real shortcoming of Helm.
Can we remove the hook?
Thank you all for the comments! Okay, let's remove the default policy server from the chart. @kubewarden/kubewarden-documentation, can you help us with the docs? :)
Yes, I'll remove the hook. The hook is not necessary for the CRDs; AFAICS, it tries to coordinate the installation of the Policy Server, because the Kubewarden controller should be running before a policy server is deployed. Due to the time it sometimes takes for the controller to be ready to handle requests, the Helm installation can fail.
@jvanz Sure. To the best of my understanding, what we want to document is:
Once we decide which policies to incorporate as defaults, we'll need to document that as well. Did I get the expectation right? Where do we expect this to change? Architecture and Quickstart are the sections I can readily think of.
Yes. :)
I'm reading the docs, and at first glance they already explain to the user how to deploy a Policy Server, so we may not need to document that. I'm wondering if we need to change anything at all. Maybe a warning after the instructions on how to install the Kubewarden stack, telling users that since Helm chart version …
Hey, I didn't realise this. If that's all it takes, then I prefer to just do that 🤦♂️, and add a new values.yml boolean option for "install default policy-server". Less hassle for the user, fewer charts to install, and the default policy-server can still be upgraded.
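A minimal sketch of what that values.yml toggle could look like (the key names `policyServer.enabled` and friends are assumptions, not the chart's actual schema):

```yaml
# Hypothetical values.yml excerpt: opt in or out of the default policy-server.
policyServer:
  enabled: true          # set to false to skip installing the default policy-server
  replicaCount: 1
  image:
    tag: "v0.2.4"
```

In the template, the policy-server resources would then be wrapped in a guard such as `{{- if .Values.policyServer.enabled }}` so that disabling the flag removes them on the next `helm upgrade`.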
Just to keep this documented here as well: removing the post-install hook will allow us to upgrade the policy server. But what I'm trying to do is also find a way to mitigate the issue #52.
From: #65 (comment)
I've updated the PR, adding a Helm chart for the Policy Server.
I'm moving this issue to blocked due to the good arguments from @viccuad in this comment: #65 (review). This issue is blocked until we decide what to do with the default policies installed in the chart.
After a team conversation, I'm converting this into an epic, because we need to complete other tasks before merging these changes.
After installing the Kubewarden stack, it is not possible to change configuration values with the `helm upgrade` command.

Reproducible steps
```console
$ cat values.yaml
policyServer:
  replicaCount: 2
  image:
    tag: "v0.2.4"
$ helm upgrade --namespace kubewarden --values values.yaml kubewarden-controller kubewarden/kubewarden-controller
```
```console
$ kubectl get policyservers default -o=jsonpath="{['.spec.image', '.spec.replicas']}"
ghcr.io/kubewarden/policy-server:v0.2.5 1
```