
Support patches for containers #5354

Open
2 tasks done
RobinMcCorkell opened this issue Oct 2, 2023 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. triage/needs-information Indicates an issue needs more information in order to work on it.

Comments

RobinMcCorkell commented Oct 2, 2023

Eschewed features

  • This issue is not requesting templating, unstructured edits, build-time side-effects from args or env vars, or any other eschewed feature.

What would you like to have added?

Specify a container patch to be applied to all containers and initContainers inside a resource.

Why is this needed?

Patching multiple containers across multiple resources is currently awkward:

  • Patches must specify the container name; wildcards are not supported
  • Attempting to patch container "foo" across multiple resources, only some of which contain a "foo", causes those without such a container to gain a new "foo"
  • Patches apply to one resource kind at a time, so the patch needs to be duplicated for different kinds

Horizontal patching across all containers is useful for cross cutting concerns, e.g. common environment variables.

Suggested syntax:

# kustomization.yaml
containerPatches:
  - path: containerpatch.yaml
    parentTarget: # matches the parent resource
      kind: ...
      ...

# containerpatch.yaml
apiVersion: core/v1
kind: Container
spec: # will be merged in standard patch fashion
  env:
    - name: MY_HORIZONTAL_ENV
      value: foobar

Can you accomplish the motivating task without this feature, and if so, how?

Two solutions exist today:

  1. Duplicated patches, one for each resource kind and container name, applied to the relevant resources. This is fairly verbose and fragile though.
  2. Replacements with the wildcard feature to select all containers. This doesn't support merging inside the resulting key though, so e.g. appending to a list inside the target key does not work.
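
Solution 1 in practice looks something like the following (resource, container, and file names are all hypothetical); each patch file repeats the same change under a different container name:

```yaml
# kustomization.yaml -- sketch of solution 1; names are hypothetical
patches:
  - path: add-env-nginx.yaml     # patches the "nginx" container
    target:
      kind: Deployment
      name: web
  - path: add-env-sidecar.yaml   # same env change, duplicated for "sidecar"
    target:
      kind: Deployment
      name: worker
```

Every new container name means another patch file and another target entry.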

What other solutions have you considered?

Patches, JSON patches, replacements. All have limitations and shortcomings.

Anything else we should know?

It's tempting to try to design a solution that works for any sort of collection inside resources, e.g. ingress routes. This quickly becomes a tarpit of edge cases though, which is why I want to focus only on the "patch all containers" case.

Feature ownership

  • I am interested in contributing this feature myself! 🎉
@RobinMcCorkell RobinMcCorkell added the kind/feature Categorizes issue or PR as related to a new feature. label Oct 2, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 2, 2023
kxk commented Oct 5, 2023

Agreed. This would greatly simplify applying changes across multiple containers and initContainers.

@natasha41575
Contributor

I think we need some more information to further understand your issue and how to approach a solution.

We have a few specific questions:

  1. Could you clarify what patches you are referring to? We have patchesStrategicMerge and patchesJson6902, which are very different in syntax and limitations. If you are referring to both, it would be helpful to have specific examples of each with your use case. It seems like you might be asking for a patch to work across multiple different GVKs, and our answer will depend on what type of patch you are talking about.

  2. Could you elaborate on why the two existing solutions do not work?

Duplicated patches, one for each resource kind and container name, applied to the relevant resources. This is fairly verbose and fragile though.

Could you provide a minimal example?

Replacements with the wildcard feature to select all containers. This doesn't support merging inside the resulting key though, so e.g. appending to a list inside the target key does not work.

It's not clear to us why replacements don't work here. For example, we understand that appending to a list inside the target key does not work, but the recommended workflow in this case is to have placeholder values in the base kustomization file. For example:

# in your base kustomization file
  env:
    - name: PLACEHOLDER_ENV_VALUE
      value: foobar

Then, you can use replacements to update the placeholder value and get your desired output:

  env:
    - name: MY_HORIZONTAL_ENV
      value: foobar
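
Concretely, that replacement might look something like this (the source resource and field names are hypothetical; this is a sketch of the suggested workflow, not a verified configuration):

```yaml
# kustomization.yaml -- hypothetical sketch of the placeholder workflow
replacements:
  - source:
      kind: ConfigMap
      name: env-source          # hypothetical source resource
      fieldPath: data.envName   # holds the value "MY_HORIZONTAL_ENV"
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          # "*" matches every container; the [name=...] selector finds
          # the placeholder entry to rename
          - spec.template.spec.containers.*.env.[name=PLACEHOLDER_ENV_VALUE].name
```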

@koba1t If you have some more examples of how to use the create option in replacements for this feature, please feel free to leave a comment.

  3. We are not sure if you are trying to patch a Container object like in your example, or a container/initContainer within a PodSpec. If it's the former, we are a little confused by your statement:

Patches need to specify the container name, no wildcards supported

When you are patching a resource, the target is specified by the GVK, name, namespace, and/or label/annotation selectors. You can leave any of these fields out, and it functions like a wildcard, e.g. if you specify only the GVK, it will apply the patch to all resources matching that GVK, and there is no need to specify the name.
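
For example, a patch target with only a GVK applies to every matching resource (the patch file name here is hypothetical):

```yaml
# kustomization.yaml
patches:
  - path: add-env.yaml   # hypothetical patch file
    target:
      kind: Deployment   # no name, namespace, or selectors:
                         # the patch applies to all Deployments
```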

If you are trying to patch a container within a podspec, again some examples would help us understand what it is you are trying to do.

/triage needs-information

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 25, 2023
@RobinMcCorkell
Author

I'm mainly talking about strategic merge patches, though I suspect the same issue applies to JSON patches too since JSON patches don't support patching multiple things in the same manifest (e.g. multiple containers in the same StatefulSet).

Here's a good example of strategic merge patches at work, and which also demonstrates how attempting to achieve my initial request would be difficult: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/patchMultipleObjects.md

In that example, the patch defines a container with name: istio-proxy, which doesn't exist in either existing Deployment and therefore gets added to the containers array.


Now imagine that instead we wanted to patch the env of one of the containers. Still easy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: not-important
spec:
  template:
    spec:
      containers:
        - name: nginx
          env:
            - name: NEWENV
              value: newenv

Except oops, now the second Deployment has an nginx container added because it didn't have one before. No issue, we'll just use name/label/annotation selectors to limit the patch to just those with an nginx container.


Now imagine that instead we wanted to patch every container, regardless of name. I tried removing the name in my patch, but that just breaks everything (containers becomes empty). So I have to specify a container name in my patch, which means I need a different patch for every container name, and then I need a lot of name/label/annotation selection so I don't end up with blank containers on resources that don't have a matching one.
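
The resulting kustomization ends up with one patch entry per container name, gated by selectors so each patch only lands where that container exists (labels and file names below are hypothetical):

```yaml
# kustomization.yaml -- the fragile patch-per-container setup; names hypothetical
patches:
  - path: env-patch-nginx.yaml
    target:
      kind: Deployment
      labelSelector: has-nginx=true     # hypothetical gating label
  - path: env-patch-sidecar.yaml
    target:
      kind: Deployment
      labelSelector: has-sidecar=true
```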

This is extremely fragile as the number of resources goes up, potentially controlled by different people who may not be as familiar with the menagerie of patches as I am.


So it seems like patches are out. What about replacements? Thankfully, replacements do have a wildcard feature, so we no longer need to care about the container name (one slight complication: if we try to replace into initContainers when it doesn't exist on a resource, we get an error. No trouble, name/label/annotation selectors to the rescue again). But replacements aren't patches!

Here's an example of when replacements aren't patches: what I want to describe is "add these env settings to my containers". With replacements though, I can either say "replace the entire env with this other env" (throwing away any that were there before) or "add this one specific env to the env of my target(s)" (meaning I now need a replacement per env setting - which is going to get cumbersome real fast).
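
Sketched as a replacement (source names hypothetical), the second option needs one entry per env setting, because each replacement can only write a single target key:

```yaml
# kustomization.yaml -- hypothetical sketch; one replacement per env entry
replacements:
  - source:
      kind: ConfigMap
      name: common-env
      fieldPath: data.MY_HORIZONTAL_ENV
    targets:
      - select:
          kind: Deployment
        fieldPaths:
          - spec.template.spec.containers.*.env.[name=MY_HORIZONTAL_ENV].value
        options:
          create: true
  # ...repeated for every additional env entry
```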


To recap:

  • Strategic patches don't work here, because leaving the container name blank causes significant weirdness, and having a patch-per-container quickly becomes unmanageable
  • Replacements are limited since they can only entirely replace a key, and cannot merge things in the same way patches can

Right now in production, I am using replacements, which results in a few hundred lines of replacements, dummy resources that are just used as replacement sources, and a whole bunch of annotations to make sure I'm applying to the right resources. Not great, but workable.

What I really want to describe is a strategic patch that applies to all sub-resources within an outer resource. Right now, I'm feeling the pain with containers and initContainers, which is why my issue focuses on that.

@leonardochaia

Hi all,
After a lot of trial and error, I have arrived at this issue.

I am trying to implement pretty much the same as @RobinMcCorkell.
We have a set of Deployments for which we don't know the name beforehand, and we want to patch them to add volumes, volumeMounts, envFrom, etc. Some Deployments have already defined these fields, but some do not.

As @RobinMcCorkell said:

Strategic patches don't work here, because leaving the container name blank causes significant weirdness, and having a patch-per-container quickly becomes unmanageable
Replacements are limited since they can only entirely replace a key, and cannot merge things in the same way patches can

I ended up writing a little plugin to initialize these fields, so that I don't have to initialize them empty in all Deployments; with the lists always defined, I can use JSON patch to add the items I need. The plugin is pretty much for my specific use case, but this does seem like a common one, which basically translates to "add an item to a list, creating the list if it does not exist". I know this has been asked before, and I know SMP does help here when you know the names of the containers in advance. However, when you don't know the container names in advance, neither SMP nor JSON patches seem to be able to "add an item to a list, creating the list if it does not exist", keeping the contents of the list when it does exist.
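
The core operation such a plugin performs can be sketched roughly like this (a minimal illustration of the "append, creating the list if missing" semantics described above, not the actual plugin code; all names are hypothetical):

```python
def ensure_list_append(obj, key, items):
    """Append items to obj[key], creating the list if it does not
    exist and preserving any entries that are already present."""
    existing = obj.setdefault(key, [])  # creates [] only when key is absent
    for item in items:
        if item not in existing:        # avoid duplicating an existing entry
            existing.append(item)
    return obj

# A container with no envFrom defined yet gains the list...
container = {"name": "app"}
ensure_list_append(container, "envFrom", [{"configMapRef": {"name": "shared"}}])

# ...while a container that already has entries keeps them.
other = {"name": "db", "envFrom": [{"secretRef": {"name": "creds"}}]}
ensure_list_append(other, "envFrom", [{"configMapRef": {"name": "shared"}}])
```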

siegenthalerroger commented Jan 4, 2024

I've also run into the same problem that's described here, and @leonardochaia described the issue very well:

when you don't know the names of the container in advance, neither SMP nor json patches seem to be able to "add an item to a list, creating the list if it does not exist", keeping the contents of the list when it does exist.

However, it bears mentioning that replacements also can't do this, from what I could tell (AFAICT #4469 wasn't actually implemented in #4886, as per the description itself).

In our case we want to have a certain envFrom entry on the first container of every Deployment with a certain label, where some of those containers may define other envFrom entries and others will have none at all.
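
A sketch of the kind of replacement one might attempt for that case (all names hypothetical): it targets a fixed index, so it can only write into position 0 rather than append alongside entries that are already there.

```yaml
# kustomization.yaml -- hypothetical sketch; writes to a fixed index,
# so it cannot append next to existing envFrom entries
replacements:
  - source:
      kind: ConfigMap
      name: shared-config                # hypothetical source
      fieldPath: metadata.name
    targets:
      - select:
          kind: Deployment
          labelSelector: inject-env=true   # hypothetical label
        fieldPaths:
          - spec.template.spec.containers.0.envFrom.0.configMapRef.name
        options:
          create: true
```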

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2024
@quentinus95

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 18, 2024