
Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused #978

Closed
ankur-gupta-guavus opened this issue Aug 17, 2020 · 13 comments


@ankur-gupta-guavus

As soon as I try to delete the EKS cluster, it fails when deleting the Kubernetes config map (aws-auth):

Error: Delete "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused

Provider Config:

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.11"
  alias                  = "giq"

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    args        = ["token", "--cluster-id", data.aws_eks_cluster.cluster.endpoint]
    command     = "aws-iam-authenticator"
  }

}

data "aws_availability_zones" "available" {
}

module "eks" {

  providers = {
    aws = aws.giq
    kubernetes = kubernetes.giq
  }

  source          = "./eks"

Although I have set load_config_file to false in the kubernetes provider, I thought this could be related to the kubeconfig (the one created by setting write_kubeconfig to true) being deleted before this config map, so I added this:

resource "kubernetes_config_map" "aws_auth" {
  count      = var.create_eks && var.manage_aws_auth ? 1 : 0
  depends_on = [
    null_resource.wait_for_cluster[0],
    local_file.kubeconfig
  ]

But even now I get the same error. I can confirm my kubeconfig file exists, but after the node_groups are deleted I get this error. Any help would be highly appreciated. Many thanks in advance.

@dpiddockcmp
Contributor

This is an error that sometimes comes up but is hard to reproduce.

The kubernetes provider, as configured, does not know anything about the kubeconfig file generated by the module. There is no relationship between them. You don't even need to write the kubeconfig file. The provider is supposed to get all of its configuration from the data sources that you are passing in.

I'm guessing you are doing a straight terraform destroy?

The easiest solution is to drop the kubernetes_config_map resource from the terraform state and then continue with the destroy.

terraform state rm module.eks.kubernetes_config_map.aws_auth
terraform destroy

@Vermyndax

This happens every time I try to delete the EKS cluster. Manually removing the resource from the state file resolved it, but it makes CI/CD automation painful.
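For pipelines, one possible sketch (assuming the config map is created by the module at module.eks.kubernetes_config_map.aws_auth[0], matching the count-indexed resource shown above) is to drop it from state unconditionally before the destroy, so the kubernetes provider is never asked to delete it:

#!/usr/bin/env bash
# Hypothetical CI destroy step: remove the aws-auth config map from state
# (ignoring failure if it is not present), then destroy everything else.
set -euo pipefail

terraform state rm 'module.eks.kubernetes_config_map.aws_auth[0]' || true
terraform destroy -auto-approve

The resource address may differ by module version, so adjust it to whatever terraform state list reports for the aws-auth config map.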

@barryib
Member

barryib commented Oct 8, 2020

When you get this kind of error, it's generally because your kubernetes provider is misconfigured (due to a bug or human error).

Can you please try the latest version of the kubernetes provider and also remove the exec block from the provider? You don't need it, because you're already using a token for authentication.
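For illustration, a minimal sketch of that provider block, based on the configuration posted above with the exec block removed (the alias and version constraint are kept only because they appear in the original):

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  alias                  = "giq"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  # The token from aws_eks_cluster_auth already handles authentication,
  # so no exec block is needed.
  token            = data.aws_eks_cluster_auth.cluster.token
  load_config_file = false
  version          = "~> 1.11"
}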

@Puneeth-n

Quoting @dpiddockcmp's suggestion above:

terraform state rm module.eks.kubernetes_config_map.aws_auth
terraform destroy

Thanks, this works.

@barryib
Member

barryib commented Oct 25, 2020

Closing this since you resolved your issue. Feel free to reopen it if needed.

@barryib barryib closed this as completed Oct 25, 2020
@jspawar

jspawar commented Dec 11, 2020

@barryib We are running into pretty much exactly the same issue: the kubeconfig resource is being deleted before the aws-auth ConfigMap resource, so we would like to reopen this issue, please. We're following the guidance of this guide from HashiCorp: https://github.com/hashicorp/learn-terraform-provision-eks-cluster.

Can you all provide some guidance on how to actually mitigate this? That is, per this comment, what could we have misconfigured in the kubernetes provider:

When you get this kind of error, it's generally because your kubernetes provider is misconfigured (due to a bug or human error).

We are configuring our kubernetes provider exactly like this: https://github.com/hashicorp/learn-terraform-provision-eks-cluster/blob/master/kubernetes.tf

The suggestion to run terraform state rm is strongly not preferable, for the same reasons given earlier in this thread (automation), so we would love to know if there is some workaround, and if not, whether we should contribute some changes to accommodate this.
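One possible mitigation sketch, assuming the module exposes the manage_aws_auth input that appears in the count expression quoted earlier in this issue (an assumption, not something confirmed here): turn off aws-auth management so Terraform never has to call the Kubernetes API on destroy, and manage the config map out of band instead.

module "eks" {
  source = "./eks"

  # Hypothetical: with manage_aws_auth = false the module never creates
  # (or deletes) the aws-auth config map, so the kubernetes provider is
  # not involved during terraform destroy. aws-auth must then be applied
  # outside Terraform (e.g. with kubectl).
  manage_aws_auth = false

  # ... remaining module inputs unchanged ...
}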

@vvchik

vvchik commented Dec 23, 2020

This bug hit me on the latest Terraform, version 0.14.

@bohdanyurov-gl

Yes, it started happening again after upgrading Terraform to 0.14 and the module to 14.0.

@Chinikins

Yes, I can confirm this is still an issue. Please re-open.

@daserose

Quoting @dpiddockcmp's suggestion above:

terraform state rm module.eks.kubernetes_config_map.aws_auth
terraform destroy

YES!!!

@bkielbasa

Same issue here. Is there any option to fix it? Removing the resource from state before destroying is hacky.

@dustyketchum

When destroying an EKS cluster, why does Terraform use the Kubernetes API to delete the aws-auth config map of a cluster that is going to be destroyed by AWS API calls milliseconds later?

Do you know what can happen if you use Terraform to destroy an EKS cluster created by this module while your kubeconfig happens to point to a DIFFERENT EKS cluster, and you don't know this time bomb is waiting for you? It may remove all authentication from the DIFFERENT cluster, then destroy the cluster you actually intended to tear down. Later you get to figure this out when you see delete /api/v1/namespaces/kube-system/configmaps/aws-auth in CloudWatch, do some research, and end up here on this issue.

Solution: on destroy, this module should just remove the aws-auth config map from state, as documented above, without calling the Kubernetes API to delete it. Unless, of course, there is some valid reason why anyone would want to delete aws-auth on a cluster without also destroying the cluster; I can't think of one, since it renders the cluster useless until repaired. This would solve the original problem in this issue and also cause less collateral damage.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 13, 2022