
Gracefully handle deletion of first-class provider cluster #416

Closed
lblackstone opened this issue Feb 8, 2019 · 7 comments
Labels: area/providers, area/resource-management, kind/bug, resolution/fixed

Comments

lblackstone (Member) commented Feb 8, 2019

If a user provisions a k8s cluster in one stack and then uses it as a provider in a separate stack, an implicit dependency is created. If the user destroys the k8s stack first, the destroy operation will fail on the dependent stack because the provider is unable to talk to the cluster.

We should either detect the cross-stack dependency or detect that the cluster was deleted, so that this situation can be avoided.
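
For context, a minimal sketch of the dependent-stack side of this setup (the stack name "org/cluster/dev" and the "kubeconfig" output name are assumptions for illustration):

```typescript
// Dependent stack: builds a first-class Kubernetes provider from the kubeconfig
// exported by a separate cluster stack. If that cluster stack is destroyed first,
// this provider can no longer reach the API server, and `pulumi destroy` here fails.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// "org/cluster/dev" and the "kubeconfig" output name are placeholders.
const clusterStack = new pulumi.StackReference("org/cluster/dev");
const kubeconfig = clusterStack.getOutput("kubeconfig");

const provider = new k8s.Provider("k8s", { kubeconfig });

// Every resource created with this provider is implicitly coupled to the other stack's cluster.
const ns = new k8s.core.v1.Namespace("app-ns", {}, { provider });
```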


The current workaround for cleaning up a stack in this state is the following:

  1. Export the stack with `pulumi stack export > stack`
  2. Edit the stack file with your editor of choice, e.g. `vim stack`, and delete the relevant resources from `.deployment.resources` (see the sketch below for a scripted alternative).
  3. Import the stack file with `pulumi stack import --file stack`
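
If `jq` is available, step 2 can be scripted rather than hand-edited; a rough sketch, where the URN pattern `"my-cluster"` is a placeholder for the resources that belonged to the deleted cluster:

```sh
# Export the state, drop every resource whose URN matches the dead cluster's
# resources (placeholder pattern), and import the edited state back.
pulumi stack export > stack.json
jq '.deployment.resources |= map(select(.urn | test("my-cluster") | not))' stack.json > stack.edited.json
pulumi stack import --file stack.edited.json
```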

Related: #881 #491

hausdorff added the area/providers, kind/bug, and area/resource-management labels on Mar 4, 2019
hausdorff (Contributor) commented:

We can't automatically infer that the cluster has been deleted (vs. just being transiently unavailable), which probably means we need a way of signaling metadata about the referenced resource across the stack reference boundary. We'd probably want to design this to comport with @metral's work.

@lblackstone @metral @lukehoban do you think we should consider this problem (though perhaps not the solution/direction I propose) for M22?

metral (Contributor) commented Mar 4, 2019

Signaling through the stack ref about resource updates sounds like the way to go.

I'm leaning towards M22 if it lines up with the rest of the slated priorities.

metral (Contributor) commented Mar 6, 2019

@pgavlin IIUC the resource import work you're focusing on could apply here as another means to access other Pulumi program resources, is that correct?

If so, what does this mean for programs that use StackReferences? Their lack of update info for the referenced resources will still be a problem. My gut is telling me we'd still need a "signaling" mechanism in this case.

pgavlin (Member) commented Mar 6, 2019

> @pgavlin IIUC the resource import work you're focusing on could apply here as another means to access other Pulumi program resources, is that correct?

I don't think that the resource import work will affect this case. The import work is focused on allowing externally created resources to be adopted by Pulumi.

lblackstone (Member, Author) commented:

After encountering this issue a few more times, I think it might be better to mark these resources as deleted on the next update. While it's possible that this could cause erroneous results if the cluster is only temporarily unavailable, this should be noticeable during preview.

Thoughts @lukehoban @pgavlin?

pgavlin (Member) commented Dec 17, 2019

I agree. I have some preliminary changes to enable this behavior; I'll push them to a branch and send them out as a draft PR.

lblackstone (Member, Author) commented:

This is addressed by #2489
