I have two Pulumi projects whose responsibilities are as follows.
Project 1 creates Azure Kubernetes (AKS) clusters based on some basic config describing the clusters to create. I use azure.containerservice.KubernetesCluster to create these in Azure. The raw kubeconfig of each created cluster is exposed as an array output of the project so that other projects can access the clusters.
Project 2 reads Project 1's outputs via a StackReference and deploys several Helm charts to each cluster using a k8s.Provider.
Problem Scenario
1. Run pulumi up on project 1 and then on project 2. This creates the N clusters defined in project 1's config, then applies the Helm charts defined in project 2 to each cluster.
2. Edit project 1's config to remove one of the existing clusters.
3. Run pulumi up on project 1. All good so far; the cluster count is now N-1.
4. Run pulumi up or pulumi refresh on project 2. The Kubernetes resources previously created on the now-deleted cluster cause errors because the host no longer exists.
I realise I can run pulumi state delete <urn> on project 2 to clean up the resources that no longer exist (because the cluster is gone), but many of these Helm charts produce a large number of Kubernetes resources, so resolving it that way would be time-consuming. Maybe a --delete-children switch when deleting a parent URN would help.
Or is there another way I'm missing to resolve this?
@markphillips100 I think your best bet here would be editing the stack manually to remove any k8s resources that no longer exist (because the underlying cluster was deleted):
1. Export the stack with pulumi stack export > stack
2. Edit the stack file with your editor of choice (e.g. vim stack) and delete the relevant resources from .deployment.resources.
3. Import the stack file with pulumi stack import --file stack
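If the charts produced many resources, hand-deleting entries is tedious. As a sketch of automating step 2 (assuming jq is installed, and that the deleted cluster's name appears in each affected resource's URN; the stack file contents and the CLUSTER fragment below are illustrative, not from the original issue):

```shell
# Normally you would start from: pulumi stack export > stack.json
# For illustration, create a tiny stack file with resources from two clusters:
cat > stack.json <<'EOF'
{"deployment":{"resources":[
  {"urn":"urn:pulumi:dev::proj::kubernetes:apps:Deployment::cluster-a-app"},
  {"urn":"urn:pulumi:dev::proj::kubernetes:apps:Deployment::cluster-b-app"}
]}}
EOF

# Hypothetical fragment identifying the deleted cluster; adjust for your stack
CLUSTER="cluster-a"

# Keep only resources whose URN does NOT mention the deleted cluster
jq --arg c "$CLUSTER" \
  '.deployment.resources |= map(select(.urn | contains($c) | not))' \
  stack.json > stack.filtered.json

# Then re-import the cleaned state: pulumi stack import --file stack.filtered.json
cat stack.filtered.json
```

Filtering on a URN substring is a blunt instrument; check the filtered file before importing it, since any unrelated resource whose URN happens to contain the fragment would also be dropped.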
Closing this out since the underlying issue is already documented in 416.
Errors & Logs
An example error:
kubernetes:apps:Deployment (eziaustraliasou-19566c14-acs-helloworld-eziaustraliasou-19566c14-sample-app1):
error: Preview failed: Get https://eziaustraliasou-19566c14-3cae24e8.hcp.australiasoutheast.azmk8s.io:443/api?timeout=32s: dial tcp: lookup eziaustraliasou-19566c14-3cae24e8.hcp.australiasoutheast.azmk8s.io: no such host
Affected product version(s)
Pulumi 1.3.1
TypeScript project dependencies:
@pulumi/azure@1.1.0
@pulumi/kubernetes@1.2.0