Cannot destroy dangling state after the EKS cluster is deleted #2517
Comments
@kumarshantanu Thanks for reporting this issue, and sorry you are facing it. I'm unable to reproduce it, and I suspect that this might be an issue within […]. Also, are you able to list what types of Kubernetes resources you're trying to manage with Pulumi? Thanks!
@rquitales Below is the […]
I notice that you're using pulumi-kubernetes v3.30.1. #2489 improved this workflow and is part of the v3.30.2 release. Can you upgrade the dependency and try again?
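A typical way to pick up that release in an npm-based project like this one (a sketch; the package manager and local setup are assumptions):

```shell
# Bump the provider SDK to the patched release...
npm install @pulumi/kubernetes@3.30.2
# ...and install the matching resource plugin for the Pulumi CLI.
pulumi plugin install resource kubernetes v3.30.2
```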
@kumarshantanu Did you have a chance to try it with a later version?
@mikhailshilkov I removed my resources using the snippet linked at pulumi/pulumi#2437 (comment) and haven't run into that issue again. I also learnt how to delete EKS resources more safely, so I didn't get into a similar situation.
@kumarshantanu Thank you for confirming that you aren't affected by this anymore. I'll go ahead and close the issue; feel free to report if/when you need help again.
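The snippet linked above isn't reproduced here, but the general technique for clearing dangling resources is state surgery: export the checkpoint with `pulumi stack export`, prune the unreachable resources, and re-import it with `pulumi stack import` (both real CLI commands). A minimal Python sketch; the pruning rule used here (drop everything whose type belongs to the `kubernetes` provider package) is an assumption, so review what it removes before importing:

```python
import json


def prune_unreachable(state: dict, provider_pkg: str = "kubernetes") -> dict:
    """Drop every resource whose type belongs to the given provider package.

    `state` is the parsed JSON from `pulumi stack export`; resources live
    under deployment.resources with types like "kubernetes:core/v1:Pod".
    """
    resources = state["deployment"]["resources"]
    state["deployment"]["resources"] = [
        r for r in resources
        if not r.get("type", "").startswith(provider_pkg + ":")
    ]
    return state
```

Usage would be `pulumi stack export --file state.json`, run the pruning over that file, then `pulumi stack import --file pruned.json`.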
What happened?
I somehow got a Pulumi EKS project into a weird state that won't get deleted with `pulumi destroy` or `PULUMI_K8S_DELETE_UNREACHABLE=true pulumi destroy`. I got the following error:

[…]

There are too many resources to delete manually. Even `PULUMI_K8S_DELETE_UNREACHABLE=true pulumi refresh` got me a similar error:

[…]

Expected Behavior
Expected the Pulumi project state to be destroyed with `pulumi destroy` or `PULUMI_K8S_DELETE_UNREACHABLE=true pulumi destroy`.

Steps to reproduce
I got into this situation when I deleted the EKS cluster and later tried to delete the resources with `pulumi destroy` and `PULUMI_K8S_DELETE_UNREACHABLE=true pulumi destroy`. You can check what happened if you can access https://app.pulumi.com/TheLadders/eks-cluster/qa-blue/previews/2028ec71-f716-45ca-9dff-d3c600401030

Output of `pulumi about`
```
CLI
Version      3.76.0
Go Version   go1.20.6
Go Compiler  gc

Plugins
NAME        VERSION
aws         5.31.0
eks         1.0.2
kubernetes  3.30.1
nodejs      unknown
tls         4.0.0

Host
OS       ubuntu
Version  20.04
Arch     x86_64

This project is written in nodejs: executable='/home/shantanu/apps/bin/node' version='v18.16.1'

Current Stack: TheLadders/eks-cluster/qa-blue

TYPE  URN
<removed to reduce Github issue size (issue form submission was not working)>

Found no pending operations associated with TheLadders/qa-blue

Backend
Name           pulumi.com
URL            https://app.pulumi.com/ladders-pulumi
User           ladders-pulumi
Organizations  ladders-pulumi, TheLadders

Dependencies:
NAME                VERSION
@pulumi/aws         5.31.0
@pulumi/eks         1.0.2
@pulumi/kubernetes  3.30.1
@pulumi/pulumi      3.74.0
@pulumi/tls         4.0.0
@types/node         16.11.6
typescript          3.8.3

Pulumi locates its logs in /tmp by default
```
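The situation described under "Steps to reproduce" can be sketched as follows; the cluster name is a placeholder, and this assumes the cluster was removed out of band with the AWS CLI:

```shell
# The cluster was originally provisioned by Pulumi, then deleted out of band:
aws eks delete-cluster --name my-cluster   # placeholder cluster name

# Later cleanup attempts fail because the Kubernetes API server is gone:
pulumi destroy
PULUMI_K8S_DELETE_UNREACHABLE=true pulumi destroy
```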
Additional context
Related conversation (between @scottslowe and myself) on Pulumi Slack:
https://pulumi-community.slack.com/archives/CRFURDVQB/p1689860173703809
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).