Rolling update panic with warm pools #13758
We're still seeing this quite frequently.
Please reproduce this with kOps 1.24.1. It should give more information about why the detach fails, which may be what causes this.
Noted. We're currently still using 1.23.2.
Still seeing this with kOps 1.24.1. Here are some logs:
Do you happen to have an ASG with warm pools enabled and being scaled to 0? In that case, I think the `update[0 - 1 - 0]` expression at kops/pkg/instancegroups/instancegroups.go line 155 (commit 9ed92a9) is the culprit.
Yep. This will cause a panic. Reproduced!
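Guessing at the mechanism: in Go, indexing into a slice past its length panics at runtime, which matches the "warm pool scaled to 0" trigger, since an instance group scaled to 0 leaves an empty list of instances to update. A minimal sketch of that failure mode (the function name and shape here are hypothetical, not the actual kOps code):

```go
package main

import "fmt"

// firstToUpdate returns the first instance in the update batch, converting
// the panic an empty slice would cause into an error. Hypothetical sketch of
// the suspected failure mode: with a warm pool enabled and the ASG scaled to
// 0, the slice of instances to update can be empty, and indexing it panics.
func firstToUpdate(update []string) (name string, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic: %v", r)
		}
	}()
	return update[0], nil // panics when update is empty
}

func main() {
	// An instance group scaled to 0 yields an empty update list.
	if _, err := firstToUpdate(nil); err != nil {
		fmt.Println(err) // panic: runtime error: index out of range [0] with length 0
	}
}
```

The usual fix for this class of bug is a length guard (e.g. `if len(update) == 0 { return }`) before indexing or slicing.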
/kind bug
1. What kops version are you running? The command kops version will display this information.
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
3. What cloud provider are you using?
4. What commands did you run? What is the simplest way to reproduce this issue?
kops rolling-update cluster --yes. We were also upgrading from kOps 1.22.4 to 1.23.2, in case that's relevant.
5. What happened after the commands executed?
Panic.
6. What did you expect to happen?
Smooth rolling update.
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or into a gist and provide the gist link here.
9. Anything else we need to know?
Similar to #11774