[FEATURE]: Re-use existing Kubernetes manifests without modifying them #5082

Closed
eysi09 opened this issue Sep 13, 2023 · 5 comments · Fixed by #5187

eysi09 (Collaborator) commented Sep 13, 2023

Feature Request

Background / Motivation

One of Garden's philosophies is automation over abstraction. Garden should be able to take existing config (e.g. Dockerfiles and K8s manifests) and add the automation that empowers users to re-use it across all stages of software delivery, e.g. for end-to-end testing or for developing in production-like environments.

This currently works great with Helm but we're missing a proper story for projects that already have K8s manifests but aren't using Helm.

In that case, you should be able to use the kubernetes Deploy action and just point Garden to the manifest files, something like:

kind: Build
name: api
type: container
---
kind: Deploy
name: api
type: kubernetes
spec:
  files: [./my-manifests.yml]

The problem, however, is that you need to reference the image built by Garden in the Pod spec of the manifest. This means you need to modify every single K8s manifest that references an image built from your source code and set the following:

# For a given container stanza in a K8s Pod spec 
name: api
image: ${actions.build.api.outputs.deployment-image-id}
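
In the context of a full manifest, that change looks something like the following (a hypothetical Deployment excerpt with placeholder names):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
        - name: api
          # No longer a plain K8s value: kubectl, kustomize etc. can't resolve this template string
          image: ${actions.build.api.outputs.deployment-image-id}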

At this point, these manifests aren't valid K8s manifests anymore and you've broken the rest of your toolchain.

This simply doesn't work: you either need to duplicate config or change tooling.

What should the user be able to do?

Drop Garden into an existing K8s project with minimal Garden config and zero modification to existing code / config.

Why do they want to do this? What problem does it solve?

To be able to use Garden to add automation to their software delivery workflows without breaking other tooling.

Suggested Implementation(s)

We have a few options here. In my opinion we should be optimising for something very simple: this isn't power-user functionality, but rather something that will feature in most getting-started / onboarding docs, and we really shouldn't have to go out of our way to explain it.

Option 1

Add a basic field to the kubernetes Deploy spec such as:

overwriteImage:
  kind: Deployment
  name: api
  containerName: api
  buildAction: api # <--- The Garden build Action
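
For context, this is roughly how it might sit inside the full Deploy action (the field name and placement are just the proposal above, not an existing API):

kind: Deploy
name: api
type: kubernetes
spec:
  files: [./my-manifests.yml]
  overwriteImage:
    kind: Deployment
    name: api
    containerName: api
    buildAction: api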

Pros:

  • Relatively simple syntax

Cons:

  • Very limited; users may want to overwrite other fields as well.

Option 2

Add a patchResources field to the spec that allows the user to identify the resource they want to patch and define the patch itself, which would be applied per the JSON merge patch spec. Note that we'd need to handle containers specifically, or require users to list all the containers in the patch, because JSON merge patch simply overwrites arrays (see the illustration below the example).

Example:

kind: Deploy
name: api
type: kubernetes
spec:
  files: [./my-manifests.yaml]
  patchResources:
    - kind: Deployment # <--- Could also be Pod or even ReplicaSet 
      name: api
      patch: # General purpose patch, applied via the JSON merge patch spec
        spec:
          replicas: 1 
      imagePatches: # <--- Specifically patch a given container image. Could also be more flexible `containerPatches`  
        - containerName: api
          newImage: ${actions.build.api.outputs.deployment-image-id}
          # <--- Optionally support just setting the image or image tag?

See the inline comments for notes on the syntax, which is very much up for debate.
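
To illustrate the array caveat mentioned above: JSON merge patch (RFC 7386) replaces arrays wholesale rather than merging them element by element, so a patch that only mentions one container drops the others. A minimal, hypothetical illustration:

# Containers list in the original manifest
containers:
  - name: api
    image: api:v1
  - name: sidecar
    image: envoy:v1

# JSON merge patch that only mentions the api container
containers:
  - name: api
    image: api:v2

# Result after applying the merge patch: the sidecar container is gone,
# because the whole array was replaced
containers:
  - name: api
    image: api:v2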

Pros:

  • Flexible

Cons:

  • More verbose. But if we assume the user is already a K8s user (given they have all the manifests), it should feel familiar.

I'm personally partial to this approach or some version thereof.

Option 3

Same as Option 1, but use Kustomize syntax and nest it under the existing kustomize field:

kind: Deploy
name: api
type: kubernetes
spec:
  kustomize:
    spec: # <--- Basically what you'd put in kustomization.yaml
      - baseResources: ...
        images: # ...

Pros:

  • Leaning on (somewhat) known concepts

Cons:

  • Kustomize doesn't support passing args to the CLI, so we'd have to do quite some work in terms of parsing the config, finding the base resources, staging to a build dir, creating a legit kustomization.yaml file, and applying kustomize build (see the sketch after this list)
  • Confusing for those not familiar with kustomize
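
To make the first con concrete, here's a sketch (names and paths purely illustrative) of the kind of kustomization.yaml Garden would have to generate into a staging dir before rendering with kustomize build:

# <staging-dir>/kustomization.yaml, generated by Garden (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - my-manifests.yaml        # the user's manifests, staged alongside this file
images:
  - name: api                # the image name as written in the manifests
    newName: registry.example.com/api
    newTag: v-abc123         # the tag produced by the Garden build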

Option 4

Support inline templates in kustomize files.

Pros:

  • Familiar to existing Kustomize users

Cons:

  • You're back to breaking config unless you have a dedicated Garden overlay
  • Also need to render the manifests and write them back to the file system before doing kustomize build, due to its limitations
  • Confusing for those not familiar with kustomize

How important is this feature for you/your team?

🥀 Crucial, Garden is unusable for us without it

The reason for that choice is that I was working on "Gardenifying" the Google microservices demo and wanted to submit it to the repo.

I can't do that without modifying the manifests, which would never get approved, so Garden can't really be used.

10ko (Member) commented Sep 13, 2023

I really like option 2, but I feel it will need to be coupled with a command to print out the rendered manifests; otherwise we risk confusing users by keeping the manifest shape and values in multiple places. Thanks for writing this up! ❤️

10ko (Member) commented Sep 13, 2023

This would also be a great escape hatch when you need a different shape of manifest for different environments, not just different values. ✨

stefreak (Member) commented

I really like option 2 and the Kustomize options a lot. I think it would be good if we implemented both approaches. A while ago I already ran into an issue where inline kustomizations would have been a solution: #4503

twelvemo (Collaborator) commented

I am also strongly in favor of option 2 and the inline Kustomize patch. I would start with option 2, since it seems a bit easier to understand for users that haven't used Kustomize before. Option 1 could quickly become another container-action situation where we keep being asked to add more fields that can be patched.

Regarding option 2: instead of using JSON merge patch we could also rely on kubectl's strategic merge patches, which already support different merge strategies that, for example, allow adding an element to an array. An implementation using kubectl patch could be quite straightforward, since it lets you patch a Kubernetes manifest locally with the dry-run option.
Example:
original.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: patch-demo-ctr
        image: nginx
      tolerations:
      - effect: NoSchedule
        key: dedicated
        value: test-team

patch.yaml

spec:
  template:
    spec:
      containers:
      - name: patch-demo-ctr-2
        image: redis

Patching it with the strategic merge strategy:

$ kubectl patch -f original.yaml --patch-file patch.yaml --dry-run=client --local=true --type=strategic -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: patch-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: redis
        name: patch-demo-ctr-2
      - image: nginx
        name: patch-demo-ctr
      tolerations:
      - effect: NoSchedule
        key: dedicated
        value: test-team

eysi09 (Collaborator, author) commented Sep 14, 2023

Thanks for the feedback, that's great stuff.

Regarding using Kustomize for this: that would probably have been my first choice if it were a little easier to handle. But kustomize doesn't support passing variables to the CLI, so you'd need to read the config, write a kustomization.yaml file to a staging dir with all the base layers correctly set, and then call the CLI directly.
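
In other words, under the hood Garden would have to do something roughly like this (paths purely illustrative):

# Stage the user's manifests plus a generated kustomization.yaml
# (with the resolved image set via the images transformer) into a temp dir,
# then render and apply:
kustomize build /tmp/garden-kustomize-staging | kubectl apply -f -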

It's a pretty annoying limitation, tracked e.g. here: kubernetes-sigs/kustomize#1113

We can keep an eye on that issue and look into adding inline Kustomize support if it gets resolved.
