
Kustomize filter kinds on "base" input #4436

Closed
epcim opened this issue Feb 1, 2022 · 6 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/unresolved Indicates an issue that can not or will not be resolved.

Comments

@epcim
Contributor

epcim commented Feb 1, 2022

Is your feature request related to a problem? Please describe.

There have already been closed discussions about whether Kustomize should be able to remove resources from its "base" source, and IMO these were closed as not being good practice.

I would like to reopen this, or rather to discuss/document further what "bases" mean for Kustomize. Sorry if I missed another issue on the same topic.

In the example below, you can see that the longhorn Helm chart provides this Job, with an annotation indicating it should be executed on deployment removal:

metadata:
  name: longhorn-uninstall
  annotations:
    helm.sh/hook: pre-delete

Why would I like to bring this to light again? In this case, the Helm chart is a "bad base" for Kustomize. It is not a direct bug in Kustomize itself, and we could blame longhorn, but it is clear that some deployment annotations (not strictly Helm-related) are used for app life-cycle management, and Kustomize itself will never take care of them.

Let's not speak only about "filtering" resources. We may also want to add annotations to mark that a specific resource should not be applied later. Obviously Kustomize is not yet in this business (not sure, but would kpt live apply handle that better?).

With KRM Fn now available, and with some more time on the project, how does the Kustomize team feel about what "bases" are and how they should be used?

  • Do we strictly expect a "private" base (where one would do overrides for dev/prod etc.)?
  • Do we suppose the "base" code for an application could be a shared resource, with Kustomize serving as a Swiss Army knife to override/patch it for a specific deployment?

If the 2nd is true, and from what I have seen many people use it this way, then Helm or any KRM Fn can serve as a generic manifest source. However, without primary control of such a source (as it probably resides in some public repo, and you can only control its global values.yaml etc.), one misses the capability to pick only the Kinds wanted (or to filter out some that are not desired, not only to transform them).

Imagine this source:

# cat ./longhorn/Kustomization 
# https://github.com/longhorn/charts/blob/master/charts/longhorn/values.yaml

namespace: longhorn-system

generatorOptions:
  disableNameSuffixHash: true

helmChartInflationGenerator:
- chartName: longhorn
  chartRepoUrl: https://charts.longhorn.io
  #chartVersion: 
  releaseName: longhorn
  releaseNamespace: longhorn-system
  valuesMerge: override
  extraArgs:
  - --include-crds
  valuesLocal:
    installCRDs: true
    fullnameOverride: longhorn
    defaultSettings:
      defaultDataPath: "/var/lib/longhorn"

That will render, besides other manifests, the Job below. And guess what: yes, a Job is immutable, so if it already exists the apply will fail; but with each apply you effectively force it to do its task: uninstall the whole longhorn deployment, including its data.

---
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    helm.sh/hook: pre-delete
    helm.sh/hook-delete-policy: hook-succeeded
  labels:
    app.kubernetes.io/instance: longhorn
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: longhorn
    app.kubernetes.io/version: v1.2.3
    helm.sh/chart: longhorn-1.2.3
  name: longhorn-uninstall
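For this specific Job, one workaround that exists today is a strategic-merge delete patch in the consuming kustomization (a sketch; it deletes by exact GVK and name only, so it does not cover the general request to filter by regex, labels, or annotations):

```yaml
# kustomization.yaml of the overlay consuming the chart output (sketch).
# The $patch: delete directive removes the matched resource from the
# build output entirely.
patches:
- patch: |-
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: longhorn-uninstall
    $patch: delete
```

This keeps the chart output as a base while dropping the hook Job, at the cost of hard-coding each unwanted resource's identity in the overlay.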

Recent discussion, resources

Describe the solution you'd like

  • a native way to filter kinds, based on name regex, labels and annotations
  • updated documentation on what is considered a "base", and on what role Kustomize plays in "life-cycle" operations like the one shown in this example

Describe alternatives you've considered

  • A KRM Fn to filter Kinds would work, but seems too complex and time-consuming to run for this case.
@epcim epcim added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 1, 2022
@k8s-ci-robot
Contributor

@epcim: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Feb 1, 2022
@KnVerey
Contributor

KnVerey commented Feb 2, 2022

Do we strictly expect "private" base (where one would do overrides for dev/prod etc)?

Creating variants is indeed Kustomize's speciality, and we eschew features that would overcomplicate that basic use case. The doc you linked to does describe the intended workflow for when a base not under your control is not doing what you want:

If the underlying base is outside of one’s control, an OTS workflow is the recommended best practice. Fork the base, remove what you don’t want and commit it to your private fork, then use kustomize on your fork. As often as desired, use git rebase to capture improvements from the upstream base.

While I'm sympathetic to the desire not to maintain a fork, all of the reasons the doc explains for eschewing deletion as a feature still hold true in my opinion. You rightly point out that we are investing in extension mechanisms (KRM functions), and these can be used to create behaviour that is out of scope for Kustomize core, which could include deletion behaviour or even something helm-lifecycle-specific.
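To make the extension route concrete, here is a minimal sketch of the filtering logic such a function could apply (this is not an official Kustomize function; the annotation-based rule and the function name are assumptions for illustration). It drops any resource in a KRM ResourceList that carries a `helm.sh/hook` annotation, such as the pre-delete Job above:

```python
# Sketch of KRM-function-style filtering logic (hypothetical, not an
# official Kustomize function). A real function would parse the
# ResourceList YAML from stdin and write the filtered list to stdout.

def drop_helm_hooks(resource_list):
    """Remove items whose metadata.annotations contain any key
    starting with 'helm.sh/hook'."""
    def is_hook(resource):
        annotations = resource.get("metadata", {}).get("annotations") or {}
        return any(key.startswith("helm.sh/hook") for key in annotations)

    resource_list["items"] = [
        r for r in resource_list["items"] if not is_hook(r)
    ]
    return resource_list
```

A real implementation would wrap this in the KRM function I/O contract (ResourceList in, ResourceList out) and would likely take the annotation keys, kinds, or label selectors to filter as function config rather than hard-coding them.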

what Kustomize plays a role in "life-cycle" operations as shown in this example

Kustomize is strictly focussed on client-side configuration management. The tool itself is not involved in or aware of lifecycle operations, and that increase in scope is not an option for us. For the helm example in particular, perhaps there is a feature that could make sense on the helm transformer, once it is extracted out of Kustomize per #4401 .

@KnVerey KnVerey added triage/unresolved Indicates an issue that can not or will not be resolved. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
