What happened?
Hi team,
We observed that when running helm upgrade --wait --timeout 15m0s, the --wait flag does not pick up the 15-minute timeout value but instead uses the default 5 minutes, producing the failure below.
NAME: our-service-name
LAST DEPLOYED: Tue Jan 6 14:27:27 2026
NAMESPACE: namespace-name
STATUS: failed
REVISION: 301
DESCRIPTION: Upgrade "our-service-name" failed: context canceled
Please refer to the logs below; the elapsed time before the context gets canceled is approximately 5 minutes:
upgrade.go:447: 2026-01-06 14:59:57.707335024 +0000 UTC m=+5.097713373 [debug] waiting for release "our-service-name" resources (created: 0 updated: 7 deleted: 0)
wait.go:50: 2026-01-06 14:59:57.707343893 +0000 UTC m=+5.097722234 [debug] beginning wait for 7 resources with timeout of 15m0s
ready.go:303: 2026-01-06 14:59:58.044701646 +0000 UTC m=+5.435080000 [debug] Deployment is not ready: namespace/our-service-name . 0 out of 2 expected pods are ready
...REDACTED LOGS...
ready.go:303: 2026-01-06 15:04:49.830540947 +0000 UTC m=+297.220919301 [debug] Deployment is not ready: namespace/our-service-name. 1 out of 2 expected pods are ready
<logs stop generating and context canceled>
The pods still eventually reach the running state on their own. Once the pods were running, we re-ran the same pipeline and the release status changed from failed to deployed without waiting for the deployment.
We did not have this issue with v3.15.4.
What did you expect to happen?
--wait would honor --timeout 15m0s and wait for the full 15-minute duration before failing.
How can we reproduce it (as minimally and precisely as possible)?
Create a small application that takes at least 6 minutes to start up in Kubernetes, then upgrade it with --wait --timeout 15m0s.
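A minimal repro sketch, assuming a chart scaffolded with helm create whose Deployment only becomes Ready after ~6 minutes (all names and the sleep-based readiness probe are placeholders, not from our actual service):

```shell
# Scaffold a throwaway chart and replace its deployment with one whose
# pods report Ready only after a 6-minute startup delay.
helm create slow-start
cat > slow-start/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-start
spec:
  replicas: 2
  selector:
    matchLabels:
      app: slow-start
  template:
    metadata:
      labels:
        app: slow-start
    spec:
      containers:
        - name: app
          image: busybox
          # Simulate a slow startup: Ready only after ~6 minutes.
          command: ["sh", "-c", "sleep 360 && touch /tmp/ready && sleep 3600"]
          readinessProbe:
            exec:
              command: ["test", "-f", "/tmp/ready"]
            periodSeconds: 10
EOF

helm install slow-start ./slow-start

# Re-run as an upgrade; with the affected version the wait aborts with
# "context canceled" after ~5 minutes instead of honoring the 15m timeout.
helm upgrade slow-start ./slow-start --wait --timeout 15m0s --debug
```

This requires a live cluster, so it is cluster-configuration rather than standalone code; with v3.15.4 the same upgrade waits the full 15 minutes and succeeds once the probes pass.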
Helm version
Details
version.BuildInfo{Version:"v4.0.4", GitCommit:"8650e1dad9e6ae38b41f60b712af9218a0d8cc11", GitTreeState:"clean", GoVersion:"go1.25.5", KubeClientVersion:"v1.34"}
Kubernetes version
Details
Client Version: v1.35.0
Kustomize Version: v5.7.1
Server Version: v1.34.1