
Improve documentation around PULUMI_K8S_DELETE_UNREACHABLE #2463

Closed
jkodroff opened this issue Jun 20, 2023 · 1 comment · Fixed by #2489

Comments

jkodroff (Member) commented on Jun 20, 2023

Hello!

  • Vote on this issue by adding a 👍 reaction
  • If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)

Issue details

When I run pulumi refresh, I get the following error message, which explains how to fix the problem with an env var:

  kubernetes:core/v1:Service (default/wpdev-wordpress):
    warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://pulumi-rmciy6s0.hcp.westus2.azmk8s.io:443/openapi/v2?timeout=32s": dial tcp: lookup pulumi-rmciy6s0.hcp.westus2.azmk8s.io: no such host
    error: Preview failed: failed to read resource state due to unreachable cluster. If the cluster has been deleted, you can edit the pulumi state to remove this resource or retry with the PULUMI_K8S_DELETE_UNREACHABLE environment variable set to true.
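
For reference, the retry the message suggests is a one-liner (a minimal sketch; the env var only needs to be set for that single invocation):

  PULUMI_K8S_DELETE_UNREACHABLE=true pulumi refresh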

However, when running pulumi destroy, I get this error message, which implies that the only way to fix the issue is to edit the state file (and doesn't even mention the pulumi state command):

  kubernetes:apps/v1:Deployment (default/wpdev-wordpress):
    error: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: Get "https://pulumi-rmciy6s0.hcp.westus2.azmk8s.io:443/openapi/v2?timeout=32s": dial tcp: lookup pulumi-rmciy6s0.hcp.westus2.azmk8s.io: no such host
    If the cluster has been deleted, you can edit the pulumi state to remove this resource
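
For comparison, here's roughly what each remediation looks like (a sketch; the URN is a placeholder you'd look up with pulumi stack --show-urns, and it assumes the env var is honored by destroy as well, which is what item 2 below presumes):

  # Retry, letting the provider drop resources on unreachable clusters from state:
  PULUMI_K8S_DELETE_UNREACHABLE=true pulumi destroy

  # Or remove the orphaned resource from state by hand:
  pulumi state delete '<resource-urn>'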

Considering that PULUMI_K8S_DELETE_UNREACHABLE is much easier to use (and probably what most people want?), it would be great if we could improve the visibility of this option in a few ways:

  1. Make sure the provider config includes the env var option: https://www.pulumi.com/registry/packages/kubernetes/api-docs/provider/#deleteunreachable_nodejs
  2. Make the error messages the same between refresh and destroy.
  3. In the error message, also mention that the configuration can be set directly in the provider declaration for future reference (see the sketch after this list), although I'm not sure it'll help anyone whose stack has already been updated and whose cluster has subsequently been deleted.
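
For future reference, here's a sketch of making the setting stick beyond a single invocation, assuming deleteUnreachable is exposed as ordinary provider config (which is what item 1 asks to have documented):

  # Set it on the stack's default Kubernetes provider:
  pulumi config set kubernetes:deleteUnreachable true

  # Or export the env var for the whole shell session:
  export PULUMI_K8S_DELETE_UNREACHABLE=true

In program code, the same option would be the provider's deleteUnreachable input (e.g. deleteUnreachable: true on a nodejs Provider declaration).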

Affected area/feature

jkodroff added the kind/enhancement and needs-triage labels on Jun 20, 2023
mikhailshilkov added the help-wanted and area/docs labels and removed the needs-triage label on Jun 20, 2023
lblackstone (Member) commented on Jul 10, 2023

#2312 fixes item 1.

lblackstone self-assigned this on Jul 10, 2023
lblackstone removed the help-wanted and area/docs labels on Jul 10, 2023
lblackstone added this to the 0.91 milestone on Jul 10, 2023
pulumi-bot added the resolution/fixed label on Jul 11, 2023