Failing destroy-time provisioner does not abort resource deletion #586

Closed

syst0m opened this issue Nov 7, 2019 · 2 comments

Comments

@syst0m
Contributor

syst0m commented Nov 7, 2019

I have issues

I created a destroy-time provisioner that looks for LoadBalancer services and fails if it finds any.
The idea is to abort the destroy while LoadBalancer services still exist, to ensure they have been migrated first.
To do this I had to create a null_resource with a destroy-time provisioner, which depends on the eks module.

I'm submitting a...

  • bug report
  • feature request
  • support request - read the FAQ first!
  • kudos, thank you, warm fuzzy

What is the current behavior?

The provisioner runs and exits with status code 1, but the destroy does not abort
and the cluster gets deleted.

If this is a bug, how to reproduce? Please include a code sample if relevant.

resource "null_resource" "cluster_destroy" {
  depends_on = [module.eks.cluster_endpoint]

  provisioner "local-exec" {
    when        = "destroy"
    interpreter = ["bash", "-c"]
    command     = <<EOF
kubectl \
  --kubeconfig <(echo "${module.eks.kubeconfig}") \
  get svc \
  | grep LoadBalancer

# grep exits 1 when no LoadBalancer services were found
if [ $? -eq 1 ]
then
  kubectl \
    --kubeconfig <(echo "${module.eks.kubeconfig}") \
    delete \
    --all \
    namespaces
else
  echo "PUBLIC LOAD BALANCERS DETECTED. QUITTING..."; exit 1
fi
EOF
  }
}

The provisioner runs and fails, but the destroy proceeds and all cluster resources are deleted:

module.habito-eks-forrest.null_resource.cluster_destroy (local-exec): Executing: ["bash" "-c" "kubectl \\\n  --kubeconfig <(echo \"apiVersion: v1\npreferences: {}\nkind: Config\n\nclusters:\n- cluster:\n    server: https://XXXX=\n  name: eks_forrest\n\ncontexts:\n- context:\n    cluster: eks_forrest\n    user: eks_forrest\n  name: eks_forrest\n\ncurrent-context: eks_forrest\n\nusers:\n- name: eks_forrest\n  user:\n    exec:\n      apiVersion: client.authentication.k8s.io/v1alpha1\n      command: aws-iam-authenticator\n      args:\n        - \"token\"\n        - \"-i\"\n        - \"forrest\"\n        - -r\n        - arn:aws:iam::XXX:role/Administrator\n\n\") \\\n    delete \\\n    --all \\\n    namespaces\nelse\n  echo \"PUBLIC LOAD BALANCERS DETECTED. QUITTING...\"; exit 1\nfi\n"]

What's the expected behavior?

When the provisioner fails, destroy of the cluster is aborted.
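
For reference, this is what I understand Terraform's on_failure meta-argument on provisioners to guarantee: its default is "fail", which should halt the run when the command exits non-zero. A minimal sketch with the setting spelled out explicitly (same resource and dependency as above; the command is just a placeholder for the real check):

  resource "null_resource" "cluster_destroy" {
    depends_on = [module.eks.cluster_endpoint]

    provisioner "local-exec" {
      when       = "destroy"
      on_failure = "fail"   # default; a non-zero exit should abort the destroy
      command    = "exit 1" # placeholder for the real LoadBalancer check
    }
  }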

Are you able to fix this problem and submit a PR? Link here if you have already.

I submitted a PR that adds count to all resources:
#580

However, adding count changes the resources' state addresses, so there would be issues with migrating existing resources in state.
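
To illustrate the migration problem: once a resource gains count, its address in state changes from resource.name to resource.name[0], so every existing object would need a manual state move. A hypothetical example (the actual addresses depend on the module's internal resource names):

  terraform state mv \
    'module.eks.aws_eks_cluster.this' \
    'module.eks.aws_eks_cluster.this[0]'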

Environment details

  • Affected module version: 6.0.2
  • OS: Ubuntu 18.04.3 LTS
  • Terraform version: v0.12.10

Any other relevant info

@max-rocket-internet
Contributor

The PR was merged, so I will close this, but also bear in mind that cleaning up AWS resources created by k8s is out of the scope of this module. These might also be interesting to you:
kubernetes/kubernetes#85023
kubernetes/enhancements#980

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 29, 2022