Waiting for app ReplicaSet be marked available indefinitely #1978
Comments
Unsure why, but after updating my pulumi-kubernetes provider from 3.18.2 to 3.18.3, it proceeded to work.
This is still happening to us. Not sure what triggers it yet.
This keeps happening to us, and we haven't quite figured out what triggers it. I think it may be related to performing a refresh before doing a `pulumi up`.
Hey there! Thanks for the insight. Is there anything you do to 'reset' Pulumi so the update goes through?
Any update?
I'm having the same problem with Pulumi.Kubernetes 3.23.1 (C#, .NET 7 project). Any ideas/updates?
It's been a while since I looked at this, but I believe after a lot of debugging we found that this happens if something in the cluster modifies the pod spec after deployment. That seems to break Pulumi's ability to perform an upgrade when the live spec doesn't match what it last applied. It should be easy to reproduce by manually modifying the pod spec and seeing if Pulumi can update it (see the sketch below).
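To make that reproduction concrete, here is a minimal sketch (not from this thread; the resource name and image are placeholders) of a Pulumi TypeScript program plus an out-of-band mutation that should trigger the mismatch described above:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical minimal Deployment; the name "app" and the nginx image
// are placeholders, not from the original issue.
const app = new k8s.apps.v1.Deployment("app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: {
                containers: [{ name: "app", image: "nginx:1.25" }],
            },
        },
    },
});

// To attempt the repro described above: after `pulumi up`, mutate the
// live pod spec out of band (simulating an in-cluster controller), e.g.:
//   kubectl patch deployment <generated-name> --type merge \
//     -p '{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"nginx:1.26"}]}}}}'
// then change the spec in this program and run `pulumi up` again to see
// whether the update hangs on the ReplicaSet await.
```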
We fixed some bugs related to refresh in #2445, so I'm hoping this is fixed in v4. I'm going to close this as resolved, but please let us know if you're still seeing the error after upgrading.
Hey @lblackstone @jsravn, I'm running into this very reliably, and I think your assessment is likely accurate. I'm on GCP GKE using Autopilot mode, and I run into this only when I modify my deployment's pod spec resource limits. I can modify resource requests no problem, but modifying resource limits reliably causes the operation to hang. I presume this has something to do with Autopilot magic around resource limits causing the pod spec modification you mentioned.
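For context, a hedged sketch of the kind of spec change this commenter describes (names and values are illustrative, not taken from their stack): on GKE Autopilot, editing `limits` reportedly hangs the update while editing `requests` does not.

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical GKE Autopilot workload; names and values are placeholders.
const autopilotApp = new k8s.apps.v1.Deployment("autopilot-app", {
    spec: {
        replicas: 1,
        selector: { matchLabels: { app: "autopilot-app" } },
        template: {
            metadata: { labels: { app: "autopilot-app" } },
            spec: {
                containers: [{
                    name: "app",
                    image: "nginx:1.25",
                    resources: {
                        // Changing requests reportedly updates fine.
                        requests: { cpu: "250m", memory: "512Mi" },
                        // Changing limits reportedly hangs the update,
                        // presumably because Autopilot rewrites them.
                        limits: { cpu: "500m", memory: "1Gi" },
                    },
                }],
            },
        },
    },
});
```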
Thanks for the updated info. I opened #2662 to track the bug with the new info you provided.
What happened?
A completely healthy deployment gets stuck on "Waiting for app ReplicaSet be marked available" despite all replicas being fully available.
Steps to reproduce
It's not yet clear to me how to reproduce this reliably; it seems to happen sporadically. I also see quite a few existing issues with the same problem, so I guess it's not completely fixed?
Expected Behavior
It should notice the deployment is healthy and proceed.
Actual Behavior
ReplicaSet status:
Deployment status:
Versions used
Additional context
I tried running a `pulumi refresh` beforehand, but it made no difference. The cluster is running Kubernetes 1.22.
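Not mentioned in this report, but for anyone blocked on the await itself: the pulumi-kubernetes provider documents a `pulumi.com/skipAwait` annotation that disables readiness checking for a resource. A minimal sketch (names and image are placeholders; whether this helps with the underlying bug is untested here):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hedged workaround sketch: `pulumi.com/skipAwait` turns off the
// provider's await logic, so `pulumi up` will not block on
// "Waiting for app ReplicaSet be marked available".
const app = new k8s.apps.v1.Deployment("app", {
    metadata: {
        annotations: { "pulumi.com/skipAwait": "true" },
    },
    spec: {
        selector: { matchLabels: { app: "app" } },
        template: {
            metadata: { labels: { app: "app" } },
            spec: { containers: [{ name: "app", image: "nginx:1.25" }] },
        },
    },
});
```

Note that skipping the await trades away the health check entirely: the update succeeds even if the rollout later fails.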
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).