
Resources not destroyed when using existing cluster #344

Open
chrislovecnm opened this issue Apr 18, 2022 · 5 comments

@chrislovecnm

Summary

When I run terraform destroy, multiple k8s resources are not cleaned up.

Steps to reproduce the behavior

Install JX3 on an existing cluster

Expected behavior

All JX3 k8s resources are removed.

Actual behavior

The following resources are left behind (a way to verify them is sketched after the list):

  1. All namespaces: jx-git-operator jx-production jx-staging jx-vault kuberhealthy nginx secret-infra tekton-pipelines
  2. CRDS
  3. ClusterRoleBindings
  4. Two EBS volumes (nexus and another)
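
A quick way to confirm what survives the destroy (a hedged sketch; the grep patterns are assumptions about how the Jenkins X objects are named, and the AWS CLI filter only lists volumes that are no longer attached):

# Leftover Kubernetes objects
kubectl get namespaces
kubectl get crd | grep -i jenkins
kubectl get clusterrolebinding | grep -i -e jx -e jenkins
# EBS volumes that are detached but not deleted
aws ec2 describe-volumes --filters Name=status,Values=available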

Terraform version

The output of terraform version is:

Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/aws v3.75.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.5.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.10.0
+ provider registry.terraform.io/hashicorp/local v2.2.2
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.1.2
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.

Module version

v1.18.11

Operating system

Linux running inside of a container

@ankitm123 (Member)

This issue is because the v18 EKS module is very restrictive when it comes to security groups, so your node-to-node and control-plane-to-node connections are not working.
Try this:

cluster_security_group_additional_rules = {
    egress_nodes_ephemeral_ports_tcp = {
      description                = "To node 1025-65535"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }
  # Extend node-to-node security group rules
  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
    ingress_cluster_all = {
      description                   = "Cluster to node all ports/protocols"
      protocol                      = "-1"
      from_port                     = 0
      to_port                       = 0
      type                          = "ingress"
      source_cluster_security_group = true
    }
  }
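
These two maps are inputs to the upstream terraform-aws-eks v18 module, so they belong on (or get passed through to) the module "eks" block; exactly where this wrapper module exposes them is an assumption. A hedged way to check that the extra rules were actually applied, assuming a recent AWS CLI and a placeholder security group id:

# Replace sg-0123456789abcdef0 with the node security group id from the EKS module outputs
aws ec2 describe-security-group-rules --filters Name=group-id,Values=sg-0123456789abcdef0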

@ankitm123 (Member)

Actually, resources not getting destroyed is most likely a bug unrelated to what I posted above. I will look into it.

@chrislovecnm (Author)

I am getting more errors deleting a cluster that was up and running correctly:

Error: error deleting S3 Bucket (logs-foo-20220420182851392800000001): BucketNotEmpty: The bucket you tried to delete is not empty
	status code: 409, request id: 


Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

I have set the KUBECONFIG and TF_KUBECONFIG env variables, but the Terraform helm provider is not picking them up.
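
For the BucketNotEmpty error, one workaround is to empty the bucket and re-run the destroy (a hedged sketch; the bucket name is copied from the error above, and aws s3 rm does not remove old object versions if the bucket is versioned):

aws s3 rm s3://logs-foo-20220420182851392800000001 --recursive
terraform destroy

Alternatively, if the module exposes it, force_destroy = true on the aws_s3_bucket resource lets Terraform delete a non-empty bucket.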

@chrislovecnm (Author)

Setting KUBE_CONFIG_PATH=/path/to/kubeconfig helped with the helm resources.
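
For reference, a minimal sketch of how that looks before re-running the destroy (the path is a placeholder):

export KUBE_CONFIG_PATH=/path/to/kubeconfig   # picked up by the kubernetes and helm providers
terraform destroy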

@chrislovecnm (Author) commented May 4, 2022

$ k get configmaps
NAME                          DATA   AGE
config                        1      13d
ingress-config                5      13d
jenkins-x-docker-registry     2      13d
jenkins-x-extensions          2      13d
jx-install-config             1      13d
kapp-config                   1      13d
kube-root-ca.crt              1      13d
lighthouse-external-plugins   1      13d
nexus                         1      13d
plugins                       1      13d

These configmaps are not destroyed. The following namespaces are also left behind:

cert-manager           Active   13d
external-dns-private   Active   11d
jx                     Active   13d
jx-git-operator        Active   13d
jx-production          Active   13d
jx-staging             Active   13d
kuberhealthy           Active   13d
nginx                  Active   13d
secret-infra           Active   13d
tekton-pipelines       Active   13d

None of these are cleaned up. Now I don't think we want to delete jx-staging or jx-production :)
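
For anyone cleaning up by hand in the meantime, a hedged sketch (namespace names are taken from the list at the top of this issue; jx-staging and jx-production are intentionally omitted):

kubectl delete namespace jx-git-operator jx-vault kuberhealthy nginx secret-infra tekton-pipelines
# CRDs and ClusterRoleBindings have to be reviewed and removed separately (see the verification sketch near the top)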
