
Mutate existing resource on policy update #1607

Closed
electrical opened this issue Feb 16, 2021 · 16 comments · Fixed by #3669
Labels: end user, enhancement, mutation

Comments

@electrical

Is your feature request related to a problem? Please describe.
Not sure if this is a bug or a feature request.
I've got a mutating policy that injects some settings for Vault.
When I update that policy and modify, for example, the image tag, there is no way that I could find to run the mutation again to update the existing Deployments.
The only way that works is to delete the Deployment and reinstall it, which is obviously not what I want.

Describe the solution you'd like
A way to re-run the mutations on the existing resources the policy would mutate.

Describe alternatives you've considered
Deleting the Deployment, which is not really a workable option.
Running an update/apply doesn't trigger the mutation either.
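
For context, here is a minimal sketch of the kind of admission-time Vault-injection mutate policy described above; the policy name, the agent-inject annotations, and the role value are illustrative assumptions rather than details from this issue. Because a rule like this only fires on admission requests, existing Deployments are not re-mutated when the policy itself changes:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-vault-annotations
spec:
  rules:
  - name: add-vault-agent-annotations
    match:
      any:
      - resources:
          kinds:
          - Deployment
    mutate:
      patchStrategicMerge:
        spec:
          template:
            metadata:
              annotations:
                vault.hashicorp.com/agent-inject: "true"
                vault.hashicorp.com/role: my-app-role   # placeholder Vault role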

@electrical added the enhancement label on Feb 16, 2021
@realshuting changed the title from "Mutating policy update" to "Mutate existing resource on policy update" on Feb 25, 2021
@emraanali11

A mutate policy which adds labels to a Namespace only takes effect for new or updated resources. This creates an inconsistency, as older resources have to be modified manually.
The feature is also needed when Kyverno goes down and a new resource is created during that window: the fix has to be applied manually after tracking it down. Another use case is when policies are added to an existing Kubernetes cluster and all the older resources need to be modified.

Some other mutate policies will need similar behavior:

  1. Adding labels to Namespaces, Deployments, etc.
  2. Adding resource requests and limits to Deployments and StatefulSets
  3. Disallowing Service creation of type LoadBalancer

@realshuting
Member

Link to the Slack conversation.

@chipzoller added the mutation label on Jul 15, 2021
@realshuting added the end user label on Aug 4, 2021
@yogeek

yogeek commented Sep 12, 2021

Very interested in this feature as well.
I think letting the policy author choose whether the policy must be applied to existing resources or only to new ones would be ideal.
It would also help address issues like argoproj/argo-cd#2437, for example (to me, using Kyverno with Argo CD is the best way to answer this kind of need).

@pixelsnbits

I have a potential use case where we already have a primary ConfigMap (data needed for admin infrastructure functions, not available for modification by non-admins) and a secondary ConfigMap (data pulled from a git repo or some other source, accessible by non-admins/devs) that are instantiated when a cluster is bootstrapped. We then install Kyverno and apply our base policies after those ConfigMaps already exist. One of those policies would ideally be a mutation that feeds the secondary ConfigMap into the primary ConfigMap on an expected schedule, since the secondary ConfigMap can be expected to change. This would also require a practice or function in Kyverno to append/prepend/concatenate one ConfigMap into another; from what I understand, there is currently only the ability to replace an object in whole through patchJson6902.

@jcam

jcam commented Mar 24, 2022

This would be useful to me as well. I am attempting to eliminate podpreset and kube-graffiti from my clusters, as both are essentially unmaintained, serve a similar function, and are limited in functionality.
podpreset will mutate Pods even when they are deployed by a Deployment, which allows me to update the podpreset policy and then perform a rollout restart on the Deployment to pick up the change.
kube-graffiti will mutate anything, and will do so immediately on policy change, to all existing resources as well. This is great because I know that all resources, regardless of when they were created, will stay in sync with the policy.

Kyverno doesn't touch Pods that are deployed by a Deployment controller and doesn't modify existing Deployments either, so I need to manually redeploy all 200+ apps to pick up any policy changes.

@realshuting
Member

The design is proposed via kyverno/KDP#4; we are targeting this support in 1.7.0.

Please review the design and let us know if it does not address your use case.

@Geethree

Another use case for mutate existing:

Let's say I have two Deployments, A and B. For whatever reason, I'd like container[0] to always match between the two Deployments. Normally this is handled by deployment pipelines, but occasionally manual intervention is required, such as rollbacks.

As such, having the ability to write a policy that always ensures the images match between the two Deployments would be nifty.
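
A hedged sketch of what such a policy might look like using the mutate-existing targets discussed later in this thread; deployment-a, deployment-b, the app container name, the default namespace, and the use of request.object to reference the trigger's image are all assumptions made for illustration:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-deployment-images
spec:
  rules:
  - name: copy-image-to-b
    match:
      any:
      - resources:
          kinds:
          - Deployment
          names:
          - deployment-a            # placeholder trigger Deployment
    mutate:
      targets:
        - apiVersion: apps/v1
          kind: Deployment
          name: deployment-b        # placeholder target Deployment
          namespace: default
      patchStrategicMerge:
        spec:
          template:
            spec:
              containers:
              - (name): app         # placeholder container name used as the merge anchor
                image: "{{ request.object.spec.template.spec.containers[0].image }}"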

@realshuting
Member

@Geethree - Kubernetes doesn't allow you to change a container's image once it's created. You need to do that before the second Deployment is created (via CI or admission mutation).

@Geethree

Geethree commented Apr 29, 2022

Yeah, I'd like for Kyverno to keep Deployment.spec.template.spec.containers[0].image in sync between A and B.

@chipzoller
Member

"@Geethree - Kubernetes doesn't allow you to change a container's image once it's created. You need to do that before the second Deployment is created (via CI or admission mutation)."

It actually does (source):

"Pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations. For spec.tolerations, you can only add new entries."

@chipzoller
Member

@electrical and @emraanali11 (and others following along), with Kyverno 1.7.0 your use cases will be solvable. See below for working policies.

Update Image Tag

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: update-image-tag
  annotations:
    policies.kyverno.io/title: Update Image Tag
    policies.kyverno.io/category: other
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Deployment
    kyverno.io/kyverno-version: 1.7.0
    policies.kyverno.io/minversion: 1.7.0
    kyverno.io/kubernetes-version: "1.23"
    policies.kyverno.io/description: >-
      This policy updates the image tag on existing Deployments which have the given annotation set but
      only for a single container.
      It may be necessary to grant additional privileges to the Kyverno ServiceAccount,
      via one of the existing ClusterRoleBindings or a new one, so it can modify Deployments.
spec:
  mutateExistingOnPolicyUpdate: true
  rules:
  - name: update-image-tag-rule
    match:
      any:
      - resources:
          kinds:
          - Deployment
          annotations:
            vault.hashicorp.com/agent-inject: "true"
    mutate:
      targets:
        - apiVersion: apps/v1
          kind: Deployment
      patchStrategicMerge:
        spec:
          template:
            spec:
              containers:
              - (name): vault-agent
                image: vault:1.5.4

Label Existing Namespaces

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: label-existing-namespaces
  annotations:
    policies.kyverno.io/title: Label Existing Namespaces
    policies.kyverno.io/category: other
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Namespace
    kyverno.io/kyverno-version: 1.7.0
    policies.kyverno.io/minversion: 1.7.0
    kyverno.io/kubernetes-version: "1.23"
    policies.kyverno.io/description: >-
      There must be some AdmissionReview request which pertains to some Namespace
      to trigger this policy. As long as this rule exists, Kyverno will manage
      this label on all target resources, either re-adding or replacing the label value.
spec:
  mutateExistingOnPolicyUpdate: true
  rules:
  - name: label-existing-namespaces-rule
    match:
      any:
      - resources:
          kinds:
          - Namespace
    mutate:
      targets:
        - apiVersion: v1
          kind: Namespace
      patchStrategicMerge:
        metadata:
          labels:
            existing: newlabelvalue

@chipzoller
Member

chipzoller commented May 30, 2022

@pixelsnbits your use case should now also be solved in the forthcoming Kyverno 1.7.0. This type of policy will work (test with RC2):

Concatenate ConfigMaps

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-cms
spec:
  mutateExistingOnPolicyUpdate: false
  rules:
  - name: concat-cm
    match:
      any:
      - resources:
          kinds:
          - ConfigMap
          names:
          - cmone
          namespaces:
          - foo
    mutate:
      targets:
        - apiVersion: v1
          kind: ConfigMap
          name: cmtwo
          namespace: bar
      patchStrategicMerge:
        data:
          key: "{{@}} plus {{request.object.data.keyone}}"

@viveksahu26
Collaborator

Hey @chipzoller, is there any way to mutate existing resources without providing a target resource? For example, the policy below should mutate the selected resource when it's created or updated. It works fine when the resource is created, but it doesn't work when the resource is updated.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources
  annotations:
    policies.kyverno.io/title: Add Default Resources
    policies.kyverno.io/category: Other
    policies.kyverno.io/severity: medium
    kyverno.io/kyverno-version: 1.6.0
    policies.kyverno.io/minversion: 1.6.0
    kyverno.io/kubernetes-version: "1.23"
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Pods which don't specify at least resource requests are assigned a QoS class
      of BestEffort which can hog resources for other Pods on Nodes. At a minimum,
      all Pods should specify resource requests in order to be labeled as the QoS
      class Burstable. This sample mutates any container in a Pod which doesn't
      specify memory or cpu requests to apply some sane defaults.
spec:
  background: true
  rules:
  - name: add-default-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
    preconditions:
      any:
      - key: "{{request.operation}}"
        operator: In
        value:
        - CREATE
        - UPDATE
    mutate:
      patchStrategicMerge:
        spec:
          containers:
            - (name): "*"
              resources:
                requests:
                  +(memory): "200Mi"
                  +(cpu): "200m"

And take this resource:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo11
  labels:
    app: myapp-1
spec:
  containers:
  - name: nginx
    image: nginx:latest

@chipzoller
Member

Those are immutable fields once a Pod is created.

@viveksahu26
Collaborator

"Those are immutable fields once a Pod is created."

Oh, got it. Are there any docs or references that contain the full list of immutable fields?

@chipzoller
Member

https://kubernetes.io/docs/concepts/workloads/pods/#pod-update-and-replacement
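
To illustrate the distinction, here is a hedged sketch of the same defaults applied at the Deployment level instead, where spec.template is mutable on update and a change rolls out new Pods; this is an assumed variant for illustration, not something verified in this thread:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources-deployments   # assumed variant of the Pod policy above
spec:
  rules:
  - name: add-default-requests
    match:
      any:
      - resources:
          kinds:
          - Deployment
    preconditions:
      any:
      - key: "{{request.operation}}"
        operator: In
        value:
        - CREATE
        - UPDATE
    mutate:
      patchStrategicMerge:
        spec:
          template:
            spec:
              containers:
                - (name): "*"
                  resources:
                    requests:
                      +(memory): "200Mi"
                      +(cpu): "200m"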
