
helm does not support objects with metadata.generateName #3348

Closed
yuvipanda opened this issue Jan 16, 2018 · 13 comments

@yuvipanda
Contributor

In Kubernetes, if you set metadata.generateName rather than metadata.name on an object, the API server generates a unique name for you, and you can discover that name from the full object returned by the initial create call. I'd expect this to work with helm, but helm does not seem to support it.
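For reference, generateName works at the API level like this (a minimal sketch with a hypothetical `demo-` prefix; the server appends a random suffix to the prefix and returns the final name in the created object):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # No `name` field: the API server generates one by appending
  # a random suffix to this prefix, e.g. demo-x7k2p
  generateName: demo-
```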

To reproduce, create a chart with the following template:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  generateName: some-name-
  name: some-name
  labels:
    app: testcase
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: testcase
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: gcr.io/google_containers/pause:3.0

If you install this, you'll get:

NAME:   wat12
LAST DEPLOYED: Tue Jan 16 01:21:12 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> MISSING
KIND           NAME
deployments    

Note that helm considers the deployment to be missing, even though kubectl tells you that the deployment was indeed created. If you try to upgrade this release, you get:

Error: UPGRADE FAILED: Could not get information about the resource: resource name may not be empty

And the deployment object is then sort of 'lost' forever, untracked.

Ideally, helm would use the response from the original create call to figure out the generated name of the object, and track it under that name.

@yuvipanda
Contributor Author

Deleting the release also fails:

Error: deletion completed with 1 error(s): resource name may not be empty

@bacongobbler
Member

bacongobbler commented Jan 16, 2018

labeling as half bug, half feature request.

@yuvipanda
Contributor Author

+1. I think the bug is that this should fail explicitly with a clearer error, rather than producing resources that are lost track of.

The feature would be supporting generateName-based objects properly :)

yuvipanda added a commit to yuvipanda/zero-to-jupyterhub-k8s that referenced this issue Jan 17, 2018
- Allow overriding the pause image used, for cases when
  external docker images are disallowed
- Explicitly set the name of the created daemonset,
  since helm does not support using generateName
  helm/helm#3348
- Set proper labels on the daemonsets
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added and removed the bug label Jun 5, 2018
@bacongobbler added the kind/feature label and removed the feature and bug labels Jun 8, 2018
@puco

puco commented Jul 3, 2018

Any activity on this? Or any advice on how to deal with Jobs that I need to run during an upgrade?

@fuel-wlightning

@puco what I did was append {{ .Release.Revision }} to the job name. The extra nice part is that it also helps identify at first glance which revision a migration belongs to.
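The workaround above can be sketched as a Job template (a hypothetical migration job; the image and command are placeholders, not from this thread). Because .Release.Revision increments on every upgrade, each revision produces a uniquely named Job instead of colliding with the immutable Job left over from the previous revision:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # Revision-suffixed name, e.g. myrelease-migrate-3 on the third revision
  name: {{ .Release.Name }}-migrate-{{ .Release.Revision }}
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myorg/migrations:latest      # hypothetical image
          command: ["./run-migrations.sh"]    # hypothetical entrypoint
```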

@nvtkaszpir

The same issue affects helm test pods, so currently using {{ .Release.Revision }} in the pod name is advised.

@bacongobbler
Member

closing as inactive.

@venkatalolla

Has anyone looked at this issue and solved it? It has been a problem for me when using generateName in Tekton's PipelineRun, and I'm not sure how to get around it. Following @ganto's work-around in #9488, I tried Helm v3.2.4; lint works, but the release does not. I still get the same error when running helm install: resource name may not be empty

@briantopping

helm template [NAME] [CHART] | kubectl create -f - works as well.

@flmmartins

@bacongobbler I am not sure this is inactive. Having to fall back to kubectl is not really an option here. Why can't helm support this?

@chrigifrei

Alternate workaround:

metadata:
  name: job-foo-{{ randAlphaNum 8 | lower }}
