--wait does not work as expected with helm version 2.2.0 install/upgrade/rollback #2006
Comments
I am going to take a look at this. @payal-hshah was this with every chart you installed or was it only with a specific one?
And just to be thorough, are you running tiller at version 2.2.0? And what version of kubernetes are you using?
@thomastaylor312 I have tried it with one chart as of now, and I have also tried running both
Note: I have 5 containers running inside the pod. Let me know if you need more detail.
Looks like this is related to when we added support for TPRs in Helm. We build every
At the community meeting today, it was requested that I get this into the 2.2.1 release.
@thomastaylor312 This issue seems to still be happening on 2.2.1.

```
❯ helm version
Client: &version.Version{SemVer:"v2.2.1", GitCommit:"db531fd75fb2a1fb0841a98d9e55c58c21f70f4c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.2.1", GitCommit:"db531fd75fb2a1fb0841a98d9e55c58c21f70f4c", GitTreeState:"clean"}
```

When I run

```
❯ helm upgrade docs-staging [path to chart] --install -f [my values.yml] --namespace [my namespace] --wait --timeout 600
```

it returns right away, even though the Deployment still shows 0 available replicas:

```
....
==> extensions/v1beta1/Deployment
NAME          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
docs-staging  1        1        1           0          3s
....
```

Any ideas? Am I doing something wrong?
@skevy I have been using it multiple times now without a problem, so I am not quite sure what is happening. Your pods all have readiness probes, correct?
It seems that the issue is back again with:
Our pipelines started to break when we upgraded, since the test starts after
@lightsaway Just checking, the pods have a readiness probe, right?
@thomastaylor312 yep, the readiness probe is there
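(Editor's aside: the readiness-probe question matters because Helm's documented --wait behavior is to wait until all Pods, PVCs, Services, and the minimum number of Pods of a Deployment are in a ready state before marking the release as successful. A minimal sketch for checking pod readiness by hand; the namespace and label below are hypothetical placeholders, not taken from the thread:)

```
# Placeholders: adjust namespace and label selector to your release.
# Print each pod behind the release with its Ready condition. If a pod has no
# readiness probe, the kubelet reports it Ready as soon as its containers start,
# so --wait can return before the app is actually serving traffic.
kubectl get pods -n my-namespace -l app=docs-staging \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
```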
A new command "system application-update" is introduced in this commit to support updating an applied application to a new version with a new versioned app tarfile. The application update leverages the existing application upload workflow to first validate/upload the new app tarfile, then invokes Armada apply or rollback to deploy the charts for the new application version. If the version has ever been applied before, an Armada rollback is performed; otherwise, an Armada apply is performed. After the apply/rollback to the new version is done, the files for the old application version are cleaned up, as are the releases that are not in the new application version. Once the update completes successfully, the status is set to "applied" so that the user can continue applying the app with user overrides. If any failure occurs during the update, an application recover is triggered to restore the app to the old version. If the application recover also fails, the application status is set to "apply-failed" so that the user can re-apply the app.

In order to use Armada rollback, a new sysinv table "kube_app_releases" is created to record deployed helm release versions. After each app apply, if any helm release version changed, the corresponding release needs to be updated in the sysinv db as well.

The application overrides were changed to tie to a specific application in commit https://review.opendev.org/#/c/660498/. Therefore, the user overrides are preserved when updating.

Note: On AIO-SX (replicas is 1), Armada apply is always used even if the version was applied before, because of an issue with leveraging rollback on AIO-SX: Armada/helm rollback --wait does not wait for pods to be ready before it returns. Related helm issues: helm/helm#4210, helm/helm#2006. (A sketch of this decision logic follows below.)

Tests conducted (AIO-SX, DX, Standard):
- functional tests (both stx-openstack and simple custom app)
  - upload stx-openstack-1.0-13-centos-stable-latest tarfile which uses latest docker images
  - apply stx-openstack
  - update to stx-openstack-1.0-13-centos-stable-versioned which uses versioned docker images
  - update back to stx-openstack-1.0-13-centos-stable-latest
  - update to a version that has fewer/more charts compared to the old version
  - remove stx-openstack
  - delete stx-openstack
- failure tests
  - application-update rejected (app not found, update to the same version, operation not permitted, etc.)
  - application-update failures that trigger recover
  - upload failure, i.e. invalid tarfile, manifest file validation failed ...
  - apply/rollback failure, i.e. download images failure, Armada apply/rollback fails

Change-Id: I4e094427e673639e2bdafd8c476b897b7b4327a3
Story: 2005350
Task: 33568
Signed-off-by: Angie Wang <angie.wang@windriver.com>
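(Editor's aside: a minimal sketch of the apply-vs-rollback selection the commit message describes. The function names are illustrative pseudocode, not StarlingX code:)

```
# Hedged sketch of the described logic; is_aio_sx, version_previously_applied,
# armada_apply, and armada_rollback are hypothetical helpers.
if is_aio_sx; then
    # Rollback is avoided on AIO-SX (replicas is 1) because Armada/helm
    # rollback --wait returns before the pods are ready.
    armada_apply "$new_version"
elif version_previously_applied "$new_version"; then
    armada_rollback "$new_version"
else
    armada_apply "$new_version"
fi
```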
With --wait, helm install/upgrade/rollback is expected to wait, before returning, until all the pods for a given release are up and running. But it seems that it does not wait and returns right away. Logging this issue after a conversation with @technosophos.
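(Editor's aside: a common interim workaround while --wait is unreliable is to poll the rollout explicitly after the Helm command returns. A minimal sketch; the chart path and namespace are placeholders, not from the thread:)

```
# Placeholders: substitute your own chart path and namespace.
helm upgrade docs-staging ./my-chart --install --namespace my-namespace --wait --timeout 600

# Block until the Deployment's rollout actually completes, independent of
# Helm's --wait behavior; exits non-zero if the rollout fails.
kubectl rollout status deployment/docs-staging -n my-namespace
```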