
"does not have attribute 'endpoint'" blocking "terraform destroy" #262

Closed

brant4test opened this issue Feb 1, 2019 · 9 comments

@brant4test
I have issues

I'm submitting a...

  • [x] bug report

What is the current behavior?

$ terraform destroy

...
Error: Error applying plan:
3 error(s) occurred:
* module.eks.output.config_map_aws_auth: Resource 'data.template_file.config_map_aws_auth' does not have attribute 'rendered' for variable 'data.template_file.config_map_aws_auth.rendered'
* module.eks.output.kubeconfig: Resource 'data.template_file.kubeconfig' does not have attribute 'rendered' for variable 'data.template_file.kubeconfig.rendered'
* module.eks.output.cluster_endpoint: Resource 'aws_eks_cluster.this' does not have attribute 'endpoint' for variable 'aws_eks_cluster.this.endpoint'

If this is a bug, how to reproduce? Please include a code sample if relevant.

  1. Set cluster_name a bit longer, say

     locals {
       cluster_name = "************************-eks-${random_string.suffix.result}"
     }

     You will get an error while running:

     $ terraform apply
Error: Error applying plan:
3 error(s) occurred:
* module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSServicePolicy: Resource 'aws_iam_role.cluster' not found for variable 'aws_iam_role.cluster.name'
* module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy: Resource 'aws_iam_role.cluster' not found for variable 'aws_iam_role.cluster.name'
* module.eks.aws_iam_role.cluster: "name_prefix" cannot be longer than 32 characters, name is limited to 64
  2. Then move on to run

     $ terraform destroy

     You will get that error. (One way to avoid the over-long name in the first place is sketched below.)
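
For illustration, a minimal sketch of keeping the generated name short enough that the IAM role name_prefix the module derives from cluster_name stays within the 32-character limit (var.base_name and the 20-character cut are assumptions for illustration; substr is a standard Terraform 0.11 interpolation function):

locals {
  # Hypothetical mitigation: trim the base name so the derived
  # IAM role name_prefix stays under 32 characters.
  cluster_name = "${substr(var.base_name, 0, 20)}-eks-${random_string.suffix.result}"
}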

What's the expected behavior?

terraform destroy completes without any errors.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version:
    terraform-aws-eks v2.1.0

  • OS: $ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 16.04.3 LTS
    Release: 16.04
    Codename: xenial

  • Terraform version:
    Terraform v0.11.11

Any other relevant info

"remove s3 contents along with dynamodb Items" is not a good way to fix it. Any suggestions? Thanks!

@skang0601
Contributor

Can you post the state?
I suspect what's happening here, which I've also seen happen in other modules, is that certain resources are created during the apply. The apply then fails for some reason, in this case the character-length limit, and the partial apply leaves those resources missing from the state or corrupts it. Running terraform refresh may alleviate the issue; however, this seems more like an issue with Terraform itself than with this module.
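
For reference, these are the standard Terraform 0.11 commands to inspect the raw state and to re-sync it with real infrastructure:

$ terraform state pull   # print the raw state JSON for inspection
$ terraform refresh      # reconcile the state with the resources that actually exist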

@cwiggs

cwiggs commented Mar 1, 2019

I was having a similar issue but was only getting one error to begin with; that error was:

module.eks.output.config_map_aws_auth: Resource 'data.template_file.config_map_aws_auth' does not have attribute 'rendered' for variable 'data.template_file.config_map_aws_auth.rendered'

So, as @skang0601 suggested, I looked at my state. The odd thing is that it seems to be empty:

$ terraform state pull
{
    "version": 3,
    "terraform_version": "0.11.11",
    "serial": 9,
    "lineage": "57e93692-87c5-b94e-1f98-d79f4e2e6232",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        },
        {
            "path": [
                "root",
                "eks"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}

I ran terraform destroy again, thinking it would say everything was already gone, but it doesn't. Instead I get more errors:

* module.eks.output.config_map_aws_auth: Resource 'data.template_file.config_map_aws_auth' does not have attribute 'rendered' for variable 'data.template_file.config_map_aws_auth.rendered'
* module.eks.output.cluster_id: Resource 'aws_eks_cluster.this' does not have attribute 'id' for variable 'aws_eks_cluster.this.id'
* module.eks.output.cluster_version: Resource 'aws_eks_cluster.this' does not have attribute 'version' for variable 'aws_eks_cluster.this.version'
* module.eks.output.cluster_endpoint: Resource 'aws_eks_cluster.this' does not have attribute 'endpoint' for variable 'aws_eks_cluster.this.endpoint'
* module.eks.output.kubeconfig: Resource 'data.template_file.kubeconfig' does not have attribute 'rendered' for variable 'data.template_file.kubeconfig.rendered'
* module.eks.output.cluster_certificate_authority_data: Resource 'aws_eks_cluster.this' does not have attribute 'certificate_authority.0.data' for variable 'aws_eks_cluster.this.certificate_authority.0.data'

What is even weirder is that when I run terraform plan -destroy it says "No changes. Infrastructure is up-to-date."

So at the end of the day everything seems to be destroyed, but Terraform is in a state where plan thinks it is destroyed while destroy doesn't. Weird.

Edit: I forgot to mention that terraform refresh doesn't seem to fix the issue.
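
If nothing real is left in AWS, one possible way to clear the stale module entries is terraform state rm with a module address. This is only a hedged suggestion, untested against this particular state, and it rewrites the state, so pull a backup first:

$ terraform state pull > backup.tfstate   # keep a copy before touching the state
$ terraform state rm module.eks           # drop every entry recorded under module.eks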

@max-rocket-internet
Contributor

Is this a problem with this module? Or a general TF issue?

@cwiggs

cwiggs commented Mar 6, 2019

@max-rocket-internet that is a good question. I was thinking of re-creating some of the module's resources in a plain TF project (not a module) for testing. Then I could remove pieces of the code one at a time to see which resource causes the issue. If someone has some extra time, go ahead and give it a try... or hopefully TF 0.12 will come out soon and fix a lot of issues :)

@brant4test
Author

brant4test commented Mar 8, 2019

The first time I destroyed an EKS stack built on terraform-aws-eks v2.2.1, I got:

Error: Error applying plan:

2 error(s) occurred:

* module.eks.output.config_map_aws_auth: Resource 'data.template_file.config_map_aws_auth' does not have attribute 'rendered' for variable 'data.template_file.config_map_aws_auth.rendered'
* local.worker_groups_launch_template: local.worker_groups_launch_template: Resource 'aws_security_group.worker_group_mgmt_two' does not have attribute 'id' for variable 'aws_security_group.worker_group_mgmt_two.id'

Then I ran $ terraform refresh
followed by $ terraform destroy, and got:

Error: Error applying plan:

4 error(s) occurred:

* module.eks.output.cluster_endpoint: Resource 'aws_eks_cluster.this' does not have attribute 'endpoint' for variable 'aws_eks_cluster.this.endpoint'
* module.eks.output.kubeconfig: Resource 'data.template_file.kubeconfig' does not have attribute 'rendered' for variable 'data.template_file.kubeconfig.rendered'
* module.eks.output.config_map_aws_auth: Resource 'data.template_file.config_map_aws_auth' does not have attribute 'rendered' for variable 'data.template_file.config_map_aws_auth.rendered'
* local.worker_groups_launch_template: local.worker_groups_launch_template: Resource 'aws_security_group.worker_group_mgmt_two' does not have attribute 'id' for variable 'aws_security_group.worker_group_mgmt_two.id'

@pst

pst commented Jun 15, 2019

I'm seeing this issue too. I'm referencing the cluster's outputs in module input variables. This works fine when creating and destroying. However, when executing destroy a second time, I see Resource 'aws_eks_cluster.current' does not have attribute 'endpoint' for variable 'aws_eks_cluster.current.endpoint' and similar errors.
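
A hypothetical sketch of that wiring, with module and variable names inferred from the resource addresses in the log below (not the actual code):

module "node_pool" {
  source = "./node_pool"

  # Cluster attributes passed into the child module; these are what
  # go stale once the cluster is already gone from the state.
  cluster_name     = "${aws_eks_cluster.current.name}"
  cluster_endpoint = "${aws_eks_cluster.current.endpoint}"
  cluster_ca       = "${aws_eks_cluster.current.certificate_authority.0.data}"
}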

Below is example output from the first destroy, followed by running destroy a second time.

Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_security_group.masters: Destruction complete after 1s
Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_subnet.current[0]: Destruction complete after 1s
Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_subnet.current[1]: Destruction complete after 1s
Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_vpc.current: Destroying... (ID: vpc-0c7beb6c314f40d4e)
Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_vpc.current: Destruction complete after 0s
Step #3 - "terraform destroy": module.eks_zero.module.cluster.aws_iam_role.master: Destruction complete after 0s
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": Destroy complete! Resources: 31 destroyed.
Finished Step #3 - "terraform destroy"
2019/06/15 14:22:27 Step Step #3 - "terraform destroy" finished
2019/06/15 14:22:27 status changed to "DONE"
DONE
[pst@pst-ryzen5 terraform-kubestack]$ cloud-build-local --config=cloudbuild-cleanup.yaml --dryrun=false .
2019/06/15 14:23:09 Warning: The server docker version installed (dev) is different from the one used in GCB (18.09.0)
2019/06/15 14:23:09 Warning: The client docker version installed (18.06.3) is different from the one used in GCB (18.09.0)
Using default tag: latest
latest: Pulling from cloud-builders/metadata
Digest: sha256:fc4ffde8edd8abe888f3cd3161f964b7b7323570bd179fc902494d13ecd41c1e
Status: Image is up to date for gcr.io/cloud-builders/metadata:latest
2019/06/15 14:23:14 Started spoofed metadata server
2019/06/15 14:23:14 Build id = localbuild_edc83167-bef8-4d93-94d5-711ad131da29
2019/06/15 14:23:14 status changed to "BUILD"
BUILD
Starting Step #0 - "docker build"
Step #0 - "docker build": Already have image (with digest): gcr.io/cloud-builders/docker
Step #0 - "docker build": Sending build context to Docker daemon  6.144kB
Step #0 - "docker build": Step 1/24 : FROM python:2.7-slim AS builder
Step #0 - "docker build":  ---> 48e3247f2a19
Step #0 - "docker build": Step 2/24 : RUN apt-get update && apt-get install -y     ca-certificates     curl     gcc     unzip     python-virtualenv
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> a6e972d482b8
Step #0 - "docker build": Step 3/24 : RUN mkdir -p /opt/bin
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 8bc839b8decd
Step #0 - "docker build": Step 4/24 : ARG KUBECTL_VERSION=v1.14.0
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 0be392095f1f
Step #0 - "docker build": Step 5/24 : ARG KUSTOMIZE_VERSION=2.0.3
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 50e08a2d1dab
Step #0 - "docker build": Step 6/24 : ARG TERRAFORM_VERSION=0.11.13
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 0ef854ba82e7
Step #0 - "docker build": Step 7/24 : ARG AWS_IAM_AUTHENTICATOR_VERSION=0.3.0
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 22a9635cc532
Step #0 - "docker build": Step 8/24 : ARG GOOGLE_CLOUD_SDK_VERSION=239.0.0
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 45b4eacfa0fc
Step #0 - "docker build": Step 9/24 : ARG AZURE_CLI_VERSION=2.0.63
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 3e7453ba44c3
Step #0 - "docker build": Step 10/24 : RUN echo "KUBECTL_VERSION: ${KUBECTL_VERSION}"     && curl -Lo /opt/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl     && chmod +x /opt/bin/kubectl     && /opt/bin/kubectl version --client=true
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> cb767cad5c95
Step #0 - "docker build": Step 11/24 : RUN echo "KUSTOMIZE_VERSION: ${KUSTOMIZE_VERSION}"     && curl -Lo /opt/bin/kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64     && chmod +x /opt/bin/kustomize     && /opt/bin/kustomize version
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 1e0fc4bf1b56
Step #0 - "docker build": Step 12/24 : RUN echo "TERRAFORM_VERSION: ${TERRAFORM_VERSION}"     && curl -LO https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip     && unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip -d /opt/bin     && chmod +x /opt/bin/terraform     && /opt/bin/terraform version
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 92f1204d1cf9
Step #0 - "docker build": Step 13/24 : RUN echo "AWS_IAM_AUTHENTICATOR_VERSION: ${AWS_IAM_AUTHENTICATOR_VERSION}"     && curl -Lo /opt/bin/aws-iam-authenticator https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v${AWS_IAM_AUTHENTICATOR_VERSION}/heptio-authenticator-aws_${AWS_IAM_AUTHENTICATOR_VERSION}_linux_amd64     && chmod +x /opt/bin/aws-iam-authenticator     && /opt/bin/aws-iam-authenticator
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 38b17c6a7f60
Step #0 - "docker build": Step 14/24 : RUN echo "AWS_CLI_VERSION: N/A"     && curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"     && unzip awscli-bundle.zip     && ./awscli-bundle/install -i /opt/aws -b /opt/bin/aws     && /opt/bin/aws --version
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> b0b66ad33360
Step #0 - "docker build": Step 15/24 : RUN echo "GOOGLE_CLOUD_SDK_VERSION: ${GOOGLE_CLOUD_SDK_VERSION}"     && curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-${GOOGLE_CLOUD_SDK_VERSION}-linux-x86_64.tar.gz     && tar zxvf google-cloud-sdk-${GOOGLE_CLOUD_SDK_VERSION}-linux-x86_64.tar.gz google-cloud-sdk     && mv google-cloud-sdk /opt/google-cloud-sdk     && /opt/google-cloud-sdk/bin/gcloud --version
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 423830ed392e
Step #0 - "docker build": Step 16/24 : RUN echo "AZURE_CLI_VERSION: ${AZURE_CLI_VERSION}"     && virtualenv /opt/azure/     && /opt/azure/bin/pip install --no-cache-dir         "urllib3<1.25,>=1.21.1"         azure-cli==${AZURE_CLI_VERSION}         azure-nspkg         azure-mgmt-nspkg     && echo '#!/usr/bin/env bash\n/opt/azure/bin/python -m azure.cli "$@"'         > /opt/bin/az     && chmod +x /opt/bin/az     && /opt/bin/az --version
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 02ba422b6adc
Step #0 - "docker build": Step 17/24 : COPY nss-wrapper /opt/bin/nss-wrapper
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> 53a82472944c
Step #0 - "docker build": Step 18/24 : FROM python:2.7-slim
Step #0 - "docker build":  ---> 48e3247f2a19
Step #0 - "docker build": Step 19/24 : RUN apt-get update && apt-get install -y       ca-certificates       git       jq       wget       openssh-client       dnsutils       libnss-wrapper     && rm -rf /var/lib/apt/lists/*
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> ebc8cd0422c3
Step #0 - "docker build": Step 20/24 : COPY --from=builder /opt /opt
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> a43ed5033ee4
Step #0 - "docker build": Step 21/24 : ENV PATH=/opt/bin:/opt/google-cloud-sdk/bin:$PATH     HOME=/infra/.user
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> a11957b36e22
Step #0 - "docker build": Step 22/24 : WORKDIR /infra
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> d02bfc9b6148
Step #0 - "docker build": Step 23/24 : ENTRYPOINT ["/opt/bin/nss-wrapper"]
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> aadcdf561309
Step #0 - "docker build": Step 24/24 : CMD bash
Step #0 - "docker build":  ---> Using cache
Step #0 - "docker build":  ---> c925e2e2332c
Step #0 - "docker build": Successfully built c925e2e2332c
Step #0 - "docker build": Successfully tagged kbst-infra-automation:bootstrap
Finished Step #0 - "docker build"
2019/06/15 14:23:15 Step Step #0 - "docker build" finished
Starting Step #1 - "terraform init"
Step #1 - "terraform init": Already have image: kbst-infra-automation:bootstrap
Step #1 - "terraform init": Initializing modules...
Step #1 - "terraform init": - module.eks_zero
Step #1 - "terraform init":   Getting source "../aws/cluster"
Step #1 - "terraform init": - module.eks_zero.cluster_metadata
Step #1 - "terraform init":   Getting source "../../common/metadata"
Step #1 - "terraform init": - module.eks_zero.cluster
Step #1 - "terraform init":   Getting source "../_modules/eks"
Step #1 - "terraform init": - module.eks_zero.cluster.cluster_services
Step #1 - "terraform init":   Getting source "../../../common/cluster_services"
Step #1 - "terraform init": - module.eks_zero.cluster.node_pool
Step #1 - "terraform init":   Getting source "./node_pool"
Step #1 - "terraform init": 
Step #1 - "terraform init": Initializing the backend...
Step #1 - "terraform init": 
Step #1 - "terraform init": Initializing provider plugins...
Step #1 - "terraform init": - Checking for available provider plugins on https://releases.hashicorp.com...
Step #1 - "terraform init": - Downloading plugin for provider "kubernetes" (1.7.0)...
Step #1 - "terraform init": - Downloading plugin for provider "null" (2.1.2)...
Step #1 - "terraform init": - Downloading plugin for provider "template" (2.1.2)...
Step #1 - "terraform init": - Downloading plugin for provider "google" (2.8.0)...
Step #1 - "terraform init": - Downloading plugin for provider "azurerm" (1.30.1)...
Step #1 - "terraform init": - Downloading plugin for provider "aws" (1.60.0)...
Step #1 - "terraform init": - Downloading plugin for provider "external" (1.1.2)...
Step #1 - "terraform init": 
Step #1 - "terraform init": The following providers do not have any version constraints in configuration,
Step #1 - "terraform init": so the latest version was installed.
Step #1 - "terraform init": 
Step #1 - "terraform init": To prevent automatic upgrades to new major versions that may contain breaking
Step #1 - "terraform init": changes, it is recommended to add version = "..." constraints to the
Step #1 - "terraform init": corresponding provider blocks in configuration, with the constraint strings
Step #1 - "terraform init": suggested below.
Step #1 - "terraform init": 
Step #1 - "terraform init": * provider.azurerm: version = "~> 1.30"
Step #1 - "terraform init": * provider.google: version = "~> 2.8"
Step #1 - "terraform init": 
Step #1 - "terraform init": Terraform has been successfully initialized!
Finished Step #1 - "terraform init"
2019/06/15 14:23:30 Step Step #1 - "terraform init" finished
Starting Step #2 - "terraform workspace"
Step #2 - "terraform workspace": Already have image: kbst-infra-automation:bootstrap
Finished Step #2 - "terraform workspace"
2019/06/15 14:23:32 Step Step #2 - "terraform workspace" finished
Starting Step #3 - "terraform destroy"
Step #3 - "terraform destroy": Already have image: kbst-infra-automation:bootstrap
Step #3 - "terraform destroy": data.aws_caller_identity.current: Refreshing state...
Step #3 - "terraform destroy": data.aws_region.current: Refreshing state...
Step #3 - "terraform destroy": data.aws_elb_hosted_zone_id.current: Refreshing state...
Step #3 - "terraform destroy": data.external.kustomize_build: Refreshing state...
Step #3 - "terraform destroy": data.aws_arn.current: Refreshing state...
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": Error: Error applying plan:
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": 3 error(s) occurred:
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": * module.eks_zero.module.cluster.module.node_pool.var.cluster_name: Resource 'aws_eks_cluster.current' does not have attribute 'name' for variable 'aws_eks_cluster.current.name'
Step #3 - "terraform destroy": * module.eks_zero.module.cluster.module.node_pool.var.cluster_endpoint: Resource 'aws_eks_cluster.current' does not have attribute 'endpoint' for variable 'aws_eks_cluster.current.endpoint'
Step #3 - "terraform destroy": * module.eks_zero.module.cluster.module.node_pool.var.cluster_ca: Resource 'aws_eks_cluster.current' does not have attribute 'certificate_authority.0.data' for variable 'aws_eks_cluster.current.certificate_authority.0.data'
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": Terraform does not automatically rollback in the face of errors.
Step #3 - "terraform destroy": Instead, your Terraform state file has been partially updated with
Step #3 - "terraform destroy": any resources that successfully completed. Please address the error
Step #3 - "terraform destroy": above and apply again to incrementally change your infrastructure.
Step #3 - "terraform destroy": 
Step #3 - "terraform destroy": 
Finished Step #3 - "terraform destroy"
2019/06/15 14:23:38 Step Step #3 - "terraform destroy" finished
2019/06/15 14:23:38 status changed to "ERROR"
ERROR
ERROR: build step 3 "kbst-infra-automation:bootstrap" failed: exit status 1
2019/06/15 14:23:40 Build finished with ERROR status

@pst

pst commented Jun 15, 2019

For what it's worth, the aws_eks_cluster data source suffers from the same issue.
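
For reference, a minimal sketch of the equivalent data source usage (the cluster name variable is an assumption):

data "aws_eks_cluster" "current" {
  # Looks up an existing cluster by name; its endpoint and
  # certificate_authority attributes fail the same way on a second destroy.
  name = "${var.cluster_name}"
}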

@max-rocket-internet
Contributor

Closing this old issue. Feel free to open a new one if you still hit this when running against the latest release 🙂

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 30, 2022