
Resource ownership between charts #2388

Open · helgi opened this issue May 4, 2017 · 16 comments
Labels: bug

Comments

@helgi (Contributor) commented May 4, 2017

I have a scenario where two charts installed via Helm both think they own an identically named resource (e.g. a Secret), which generally isn't a problem until you want to helm delete one of them.

Let's take a simple example: chart dingo has a Secret named db-access, and then later a new cool chart named baby is created, which also happens to have a Secret named db-access. Both are in the same namespace.

helm status on both of the releases tells me they have db-access. However, after I run helm delete --purge dingo, db-access no longer shows up in helm status baby; it is as if the dingo ate the baby's resource, so to speak.

What is the expected behaviour and potential fixes?

A few things come to mind:

  • Only remove resources that are not in other releases
  • Installation fails if there are identically named resources (of the same resource type) in a different release
  • A flag (or flags) to support either or both of the above scenarios, or the inverse
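To make the collision concrete, here is a minimal sketch of the scenario above. The resource name db-access and the chart names come from the issue; the secret contents are invented for illustration:

```yaml
# charts/dingo/templates/secret.yaml
# (and, identically, charts/baby/templates/secret.yaml)
apiVersion: v1
kind: Secret
metadata:
  name: db-access   # hard-coded, so both releases render the same object
type: Opaque
stringData:
  password: example
```

Because both releases render the same object, `helm delete --purge dingo` removes the Secret even though the baby release still lists it.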
@helgi (Contributor, Author) commented May 4, 2017

helm version
Client: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.4.1", GitCommit:"46d9ea82e2c925186e1fc620a8320ce1314cbb02", GitTreeState:"clean"}
@krancour (Contributor) commented May 5, 2017

This seems to me like a glaring deficiency in both of the charts themselves. Ideally, all resources in a chart are prefixed with the chart name and release name, specifically to avoid these sorts of naming conflicts when two releases share a namespace.

@krancour (Contributor) commented May 5, 2017

In fact, when you run helm create <name>, the scaffolding for your new chart even includes a template partial that's intended to be used, and re-used, ad nauseam throughout the chart to make prefixing resource names with the chart name and release name quasi-easy.
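For reference, a sketch of that scaffolded partial (abbreviated; the exact contents vary by Helm version, and mychart stands in for the actual chart name):

```yaml
{{/* templates/_helpers.tpl, as generated by helm create (abbreviated) */}}
{{- define "mychart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

A resource would then use something like `name: {{ include "mychart.fullname" . }}-db-access`, so two releases in the same namespace render distinct resource names.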

@jchauncey (Contributor) commented May 5, 2017

The problem with that approach, though, is that there is no real way to enforce that model on a chart. If we allow developers to create charts that are used to deploy their applications, there are bound to be naming collisions at some point. I think having a better linting system to enforce rules would be a good first step.

see #1495

@helgi (Contributor, Author) commented May 5, 2017

@krancour when you use words like ideally and intended, and when using the chart name and release name is at best a recommendation rather than an enforced requirement, then this is not a deficiency in the chart :)

Now, my question was around expected behaviour: is the answer here that, unless I use the recommended partials and naming conventions, Helm will not be able to understand that more than one release owns the same resource within the same namespace?

I do think my proposals above are pretty "okay" as a solution, even if they are hidden behind flags. A package manager could help resolve this by a) not touching resources defined by two or more packages, b) preventing installation of identically named resources, or c) a combination thereof, leaving it up to the end user which behaviour to use.

At the end of the day, yes, I can fix the charts to not have identically named resources and make sure all charts are rolled out at the same time, etc etc but that was not my point here - I was attempting to bring up a problem which will no doubt surprise others in the future, if the naming conventions are not enforced going forward.

@krancour what else can I do to convince you this is a tooling issue and not a chart deficiency, per se?

@krancour (Contributor) commented May 5, 2017

@helgi no need to convince me. I wasn't trying to convince you, per se, that the tooling couldn't / shouldn't be better in this regard, but only that there are some established best practices that perhaps were not observed here. The Go programming language isn't at fault for nil pointer dereferences. Your code is. Likewise, Helm charts can be poorly written in a way that will expose undesired behavior.

I agree with what @jchauncey said about improved chart linting possibly being one solution.

@thomastaylor312 (Collaborator) commented May 6, 2017

@jchauncey and @helgi, do you both think that an implementation of #1495 would cover this? If it is covered, we can probably close this. If not, we can probably create another issue with a specific feature request.

@atombender commented May 7, 2017

I think this is a major deficiency in Helm, because it means there's no way to ensure that charts are fully encapsulated. Helm aims to be a high-level tool (treat charts as atomic units), but approaches the problem with a fairly low-level mechanism (Kubernetes resources).

We ended up with two tactics to enforce uniqueness. First, we require that all of our apps accept a name template variable and that the app uses this as the name or prefix for all of its resources. So, for example, if an app has a Deployment:

kind: Deployment
metadata:
  name: {{ name }}

and if you have several services, you rely on the name as a prefix:

kind: Service
metadata:
  name: {{ name }}-web

The second part is to use annotations. We require that all apps take two template variables labels and annotations that must be injected into metadata.labels and metadata.annotations respectively. The labels are important for resource targeting (e.g. services), obviously. Annotations let us track what resources belong to an app, as well as things like git versions and who deployed things.
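For illustration, a sketch of how the injected labels and annotations variables might land on a resource, using the same placeholder syntax as the snippets above (the example values are invented; the real keys depend on their tooling):

```yaml
kind: Service
metadata:
  name: {{ name }}-web
  labels: {{ labels }}           # e.g. an app label, used for service targeting
  annotations: {{ annotations }} # e.g. owning app, git version, deployer identity
```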

Of course, manually specifying all of this would be a nightmare, so we wrap Helm with a small deployment tool specific to our company. One benefit is that we can deploy multiple concurrent versions of an app (e.g. for A/B testing, blue/green deployment, or release canaries) and know that releases don't tread on each other's toes.

@fejta-bot commented Dec 24, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot commented Jan 23, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot commented Feb 22, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@helgi (Contributor, Author) commented Mar 22, 2018

/reopen

@k8s-ci-robot reopened this Mar 22, 2018
@thomastaylor312 (Collaborator) commented Mar 23, 2018

@helgi This is labeled as a question and should probably be closed. Should this be relabeled and kept open?

@helgi (Contributor, Author) commented Mar 23, 2018

@thomastaylor312 I do not consider this a question but rather a problem/bug with how Helm lets multiple charts claim ownership over the same resource.

@thomastaylor312 (Collaborator) commented Apr 2, 2018

Ok, I will relabel this.

@Jamstah commented Nov 26, 2019

What I would really like to see here is a model for shared ownership. Sometimes there really is a need for a Helm chart to install a cluster-level resource of a given name, one and only one; in those cases, the chart cannot also be installed twice.

If this resource could somehow be tagged as "shared" between releases, with the second install failing if it tries to modify the resource, and the resource only actually deleted once no shared owners are left, that would be awesome.

I get that this doesn't fit the Helm model. How should this problem be addressed in a Helm world?
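To make the proposal concrete, here is one purely hypothetical shape such tagging could take. Neither the annotation key nor the semantics exist in Helm; they are invented here for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shared-reader
  annotations:
    # hypothetical annotation, not supported by Helm: a second install
    # fails if it tries to modify the resource, and deletion only
    # happens once no owning releases remain
    example.com/helm-shared: "true"
```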
