
Support for mixins #759

Closed
sergey-shambir opened this issue Feb 2, 2019 · 17 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

sergey-shambir commented Feb 2, 2019

Kustomize currently interprets each overlay as a full set of resources and patches: a patch can only modify a resource that is listed directly in resources: or indirectly through bases:
This means it is impossible to collect a group of resources and patches together for later reuse.
Currently I can create the following overlay hierarchy:

  1. Overlay base, without any bases: defines the cluster skeleton with services and pods
  2. Overlay base_debug, which inherits base and enables debug tools included in the containers (these tools are disabled by default so the same container image can be used in production and test environments)
  3. Overlay base_debug_aws, which adds AWS configs and secrets for services
  4. Overlay base_debug_aws_scale_hard, which adds many replicas for each service to test horizontal scaling
  5. Final overlay test_develop, which contains the configuration for a concrete test environment available at a concrete domain

This looks like inheritance in OOP (with multiple inheritance when you have multiple bases).

Of course, I can fold the changes from steps 2-4 into the base or the final overlays. I can also create several bases (one for the skeleton and a few for things like secrets/configmaps) and maintain the patches in one place so that multiple overlays can reuse them.

But from my point of view, it would be better to allow mixins: overlays that contain patches for resources the overlay itself doesn't define. Like this:

  1. Overlay base defines pods a_deployment.yaml and b_deployment.yaml
  2. Overlay debug defines the patch a_deployment_debug.yaml, but includes neither base in its bases nor a_deployment.yaml in its resources
  3. Overlay aws adds a new resource aws_secret.yaml and a new patch b_deployment.yaml
  4. Overlay scale_hard adds patches with many replicas for a_deployment.yaml and b_deployment.yaml
  5. Overlay test_develop combines base, debug, aws, and scale_hard in a predictable order.
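To make the combination concrete, the final overlay's kustomization could look something like this (hypothetical: the mixin behavior proposed in this issue does not exist, so plain kustomize would reject the patch-only overlays, which patch resources they don't declare):

```yaml
# test_develop/kustomization.yaml -- hypothetical layout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base        # defines a_deployment.yaml and b_deployment.yaml
  - ../debug       # patch-only overlay for a_deployment.yaml
  - ../aws         # adds aws_secret.yaml and patches b_deployment.yaml
  - ../scale_hard  # replica-count patches for both deployments
```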

It's possible to change kustomize in (at least) two ways:

  1. Allow overlays included in the bases list to patch resources defined by a previous base in the same list:

     ```yaml
     apiVersion: kustomize.config.k8s.io/v1beta1
     kind: Kustomization
     bases:
       - ../base         # defines pods A and B
       - ../mixins/debug # defines a patch for A, but does not add A as a resource or `base` as its base kustomization
       - ../mixins/aws   # defines a patch for B, but does not define B
     ```

  2. Add a separate mixins: key that introduces a Mixin: an overlay that may patch resources it does not define, processed after all bases have been processed.
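The second option could look roughly like this (hypothetical syntax; the mixins: key is the proposal, not an existing kustomize field):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../base          # defines pods A and B
mixins:
  - ../mixins/debug  # patches A without declaring it; applied after all bases
  - ../mixins/aws    # patches B without declaring it
```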
@sergey-shambir (Author):

Looks like this is related to #727

paveq commented Mar 4, 2019

It would be very useful to be able to "compose" an application from individual components, without having to rely on inheritance chain.

For example, an application base layer should be able to refer to a database service that is neither part of the application nor something the base layer should extend. Instead, the final kustomize layer should be able to mix and match the application with different DB sizes / bases.
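A sketch of that composition under the mixin idea (all names are illustrative, and the patch-only db size layer assumes the behavior proposed in this issue):

```yaml
# overlays/production/kustomization.yaml -- illustrative only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../app/base       # the application itself
  - ../../db/postgres    # a database base maintained separately
  - ../../db/size-large  # patch-only layer sizing the DB for this environment
```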

fentas (Contributor) commented Mar 11, 2019

I really like the idea of a separate key like mixins: that allows loading a group of patches without needing to reference bases there.

We would also like to have a structure like

.
├── kustomize
│   ├── base
│   │   └── # all the base services / resources
│   ├── overlay
│   │   └── # a collection of base services for different environments (prod/dev/etc.)
│   └── patches
│        └── # different sets of general patches (e.g. high availability changes etc.)
└── playbooks
     └── # <*n physical environments>
          ├── patches
          │   └── # specific patches
          └── # uses an overlay as base, adds specific patches and some general patches

The last part (within playbooks) is kind of a pain, because each general patch has to point at every patch/resource individually, and that list has to be maintained in duplicate (across multiple playbooks) instead of pointing to a single collection.
Also, doing patches this way currently fails with a security error (you can't reference files back up the directory tree).

edit: a workaround for the security error is to create a symlink pointing back up the file tree.
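For reference, the symlink workaround looks like this; newer kustomize releases also expose a flag to relax the load restriction (the flag spelling has changed across versions, so check `kustomize build --help`):

```
# from within a playbook directory, link the shared patches back in:
ln -s ../../kustomize/patches shared-patches

# or, on recent kustomize versions, disable the restriction outright:
kustomize build --load-restrictor LoadRestrictionsNone .
```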

kid commented May 24, 2019

One use case for this would be a multi-tenant system with multiple release channels and resource allocations per tenant.

To do this currently, one would need to create one base for each size/release-channel combination.

With mixins, this could be achieved by having one mixin to override images and another to set resources.
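Under the proposal, a tenant overlay might combine the two like this (hypothetical mixins: syntax, illustrative paths):

```yaml
# tenants/tenant-a/kustomization.yaml -- hypothetical
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
mixins:
  - ../../mixins/channel-stable  # overrides image tags for this release channel
  - ../../mixins/size-large      # sets resource requests/limits
```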

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 22, 2019
@fejta-bot:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 21, 2019
dsyer commented Nov 6, 2019

This issue should stay alive. Lack of activity is no measure of interest here. We are waiting for something to actually happen.

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Feb 4, 2020
dsyer commented Feb 4, 2020

Bump.

pgpx (Contributor) commented Feb 5, 2020

I created a PR for #1251 that essentially does this (#2168), though I added a new Kind (KustomizationPatch) that works as a 'mixin' in the sense of the initial comment in this thread. A better name than KustomizationPatch is needed, though!

@fejta-bot:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Mar 6, 2020
dantman commented Mar 6, 2020

/remove-lifecycle rotten

zishan commented May 28, 2020

@pgpx looks like your #2168 is related to kubernetes/enhancements#1803

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 26, 2020
@fejta-bot:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 25, 2020
@Shell32-Natsu (Contributor):

Looks like components has solved this issue. https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md

/close
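For reference, components map onto the original example roughly as follows (file names follow the first comment in this thread; see the linked components.md for the authoritative syntax):

```yaml
# components/debug/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesStrategicMerge:
  - a_deployment_debug.yaml

# ---
# overlays/test_develop/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
components:
  - ../../components/debug
  - ../../components/aws
```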

@k8s-ci-robot (Contributor):

@Shell32-Natsu: Closing this issue.

In response to this:

Looks like components has solved this issue. https://github.com/kubernetes-sigs/kustomize/blob/master/examples/components.md

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
