--wait does not work as expected with helm version 2.2.0 install/upgrade/rollback #2006

Closed
payal-hshah opened this issue Feb 22, 2017 · 10 comments · Fixed by #2016

Comments

@payal-hshah

It is expected that helm install/upgrade/rollback with --wait blocks until all the pods for a given release are up and running before returning. However, it seems like it does not wait and returns right away. Logging an issue after a conversation with @technosophos.

@payal-hshah payal-hshah changed the title --wait does not work as expected with helm install/upgrade/rollback --wait does not work as expected with helm version 2.2.0 install/upgrade/rollback Feb 22, 2017
@thomastaylor312
Contributor

I am going to take a look at this. @payal-hshah was this with every chart you installed or was it only with a specific one?

@thomastaylor312 thomastaylor312 self-assigned this Feb 22, 2017
@thomastaylor312
Contributor

And just to be thorough, are you running tiller at version 2.2.0? And what version of kubernetes are you using?

@payal-hshah
Author

payal-hshah commented Feb 22, 2017

@thomastaylor312 I have tried it with one chart so far, running both helm install and helm upgrade, and I'm seeing the same behavior. Also, my tiller is upgraded to v2.2.0.

helm version

Client: &version.Version{SemVer:"v2.2.0", GitCommit:"fc315ab59850ddd1b9b4959c89ef008fef5cdf89", GitTreeState:"clean"}

Server: &version.Version{SemVer:"v2.2.0", GitCommit:"fc315ab59850ddd1b9b4959c89ef008fef5cdf89", GitTreeState:"clean"}

Note: I have 5 containers running inside the pod

Let me know if you need more detail.

@thomastaylor312
Contributor

thomastaylor312 commented Feb 22, 2017

Looks like this is related to when we added support for TPRs in Helm. We build every resource.Info type as *runtime.Unstructured, which breaks the type casting used by the --wait flag and by --recreate-pods. /cc @adamreese
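
To illustrate the failure mode described above, here is a minimal, hypothetical Go sketch (it uses current apimachinery/client-go packages rather than the 2017-era *runtime.Unstructured type, and the isDeployment helper is invented for illustration): a type assertion that expects the typed Deployment struct falls through when the object was decoded as unstructured, so wait-style logic never recognizes the resource.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
)

// isDeployment mimics the kind of type assertion the --wait logic relies on:
// it only recognizes a Deployment when the runtime.Object is the typed
// struct, not an unstructured map of fields.
func isDeployment(obj runtime.Object) bool {
	_, ok := obj.(*appsv1.Deployment)
	return ok
}

func main() {
	typed := &appsv1.Deployment{}
	untyped := &unstructured.Unstructured{
		Object: map[string]interface{}{"kind": "Deployment"},
	}

	fmt.Println(isDeployment(typed))   // true: the wait logic would track this object
	fmt.Println(isDeployment(untyped)) // false: an unstructured object is silently skipped
}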

adamreese added a commit to adamreese/helm that referenced this issue Feb 23, 2017
@technosophos technosophos added this to the 2.2.1 milestone Feb 23, 2017
@technosophos
Member

At the community meeting today, it was requested that I get this into the 2.2.1 release.

adamreese added a commit to adamreese/helm that referenced this issue Feb 24, 2017
technosophos pushed a commit that referenced this issue Feb 24, 2017
larryrensing pushed a commit to larryrensing/helm that referenced this issue Feb 28, 2017
@skevy

skevy commented Mar 1, 2017

@thomastaylor312 This issue seems to still be happening on 2.2.1.

❯ helm version
Client: &version.Version{SemVer:"v2.2.1", GitCommit:"db531fd75fb2a1fb0841a98d9e55c58c21f70f4c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.2.1", GitCommit:"db531fd75fb2a1fb0841a98d9e55c58c21f70f4c", GitTreeState:"clean"}

When I run helm upgrade, it exits immediately with output saying "Available" == 0.

❯ helm upgrade docs-staging [path to chart] --install -f [my values.yml] --namespace [my namespace] --wait --timeout 600
....
==> extensions/v1beta1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
docs-staging  1        1        1           0          3s
....

Any ideas? Am I doing something wrong?

@thomastaylor312
Contributor

@skevy I have been using it multiple times now without a problem, so I am not quite sure what is happening. Your pods all have readiness probes, correct?
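
For context on why readiness probes matter here: --wait treats a Deployment as ready only once the controller reports enough available replicas, and a pod only counts as available after its readiness probe passes. Below is a rough, hypothetical Go sketch of that kind of check (not Helm's actual implementation; the deploymentReady helper is invented), using current k8s.io/api types.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// deploymentReady sketches the sort of check --wait performs for a Deployment:
// the controller-reported available replicas must reach the desired count.
func deploymentReady(d *appsv1.Deployment) bool {
	expected := int32(1)
	if d.Spec.Replicas != nil {
		expected = *d.Spec.Replicas
	}
	return d.Status.AvailableReplicas >= expected
}

func main() {
	one := int32(1)
	d := &appsv1.Deployment{}
	d.Spec.Replicas = &one
	d.Status.AvailableReplicas = 0      // matches the AVAILABLE column shown above
	fmt.Println(deploymentReady(d))     // false: a working --wait should keep polling here
}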

@lightsaway

lightsaway commented Apr 12, 2017

It seems that issue is back again with:

Client: &version.Version{SemVer:"v2.3.0", GitCommit:"2342275b61e0539d259590975cfb78f23afcc1e3", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}

Our pipelines started to break when we upgraded, since our tests start right after helm install ... --wait returns and the pod is not ready yet.
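
As a CI-side safeguard until --wait behaves reliably, one option is to poll pod readiness explicitly before starting the tests. A minimal sketch with client-go follows; the kubeconfig path, namespace, and "release=docs-staging" label selector are placeholders to adjust for the chart under test.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every pod matching the label selector in the
// namespace has a Ready condition of True.
func podsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Placeholder kubeconfig path, namespace, and release label below.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := podsReady(context.Background(), cs, "my-namespace", "release=docs-staging")
		if err == nil && ok {
			fmt.Println("all pods ready, safe to start tests")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for pods")
}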

@thomastaylor312
Contributor

@lightsaway Just checking, the pods have a readiness probe, right?

@lightsaway

lightsaway commented Apr 12, 2017

@thomastaylor312 yep, readiness probe is there

MarioCarrilloA pushed a commit to MarioCarrilloA/config that referenced this issue Jun 10, 2019
A new command "system application-update" is introduced in this
commit to support updating an applied application to a new version
with a new versioned app tarfile.

The application update leverages the existing application upload
workflow to first validate/upload the new app tarfile, then
invokes Armada apply or rollback to deploy the charts for the new
versioned application. If the version has ever been applied before,
Armada rollback will be performed; otherwise, Armada apply will be
performed.

After the apply/rollback to the new version is done, the files for the
old application version will be cleaned up, as well as the releases
which are not in the new application version. Once the update is
completed successfully, the status will be set to "applied" so that the
user can continue applying the app with user overrides.

If there is any failure during updating, application recover will be
triggered to recover the app to the old version. If application recover
fails, the application status will be set to "apply-failed" so
that the user can re-apply the app.

In order to use Armada rollback, a new sysinv table "kube_app_releases"
is created to record deployed helm release versions. After each app
apply, if any helm release version changed, the corresponding release
needs to be updated in the sysinv db as well.

The application overrides have been changed to tie to a specific
application in commit https://review.opendev.org/#/c/660498/. Therefore,
the user overrides are preserved when updating.

Note: On AIO-SX, always use Armada apply even if the version was
      applied before. There is an issue with leveraging rollback on
      AIO-SX (replicas is 1): Armada/helm rollback --wait does not
      wait for pods to be ready before it returns.
      Related helm issues:
      helm/helm#4210
      helm/helm#2006

Tests conducted (AIO-SX, DX, Standard):
  - functional tests (both stx-openstack and simple custom app)
    - upload stx-openstack-1.0-13-centos-stable-latest tarfile
      which uses latest docker images
    - apply stx-openstack
    - update to stx-openstack-1.0-13-centos-stable-versioned
      which uses versioned docker images
    - update back to stx-openstack-1.0-13-centos-stable-latest
    - update to a version that has less/more charts compared to
      the old version
    - remove stx-openstack
    - delete stx-openstack
  - failure tests
    - application-update rejected
      (app not found, update to a same version,
       operation not permitted etc...)
    - application-update fails that trigger recover
      - upload failure
        ie. invalid tarfile, manifest file validation failed ...
      - apply/rollback failure
        ie. download images failure, Armada apply/rollback fails

Change-Id: I4e094427e673639e2bdafd8c476b897b7b4327a3
Story: 2005350
Task: 33568
Signed-off-by: Angie Wang <angie.wang@windriver.com>
slittle1 pushed a commit to starlingx-staging/openstack-armada-app-test that referenced this issue Sep 3, 2019
slittle1 pushed a commit to starlingx-staging/utilities2 that referenced this issue Sep 9, 2019
MichaelMorrisEst pushed a commit to Nordix/helm that referenced this issue Nov 17, 2023