Plan contains unexpected changes. #663

Closed · 1 task done
colijack opened this issue Jan 6, 2020 · 3 comments

colijack commented Jan 6, 2020

I have issues

I'm submitting a...

  • bug report

What is the current behavior?

After updating our EKS config (such as to include tags), a plan includes unexpected changes.

If this is a bug, how to reproduce? Please include a code sample if relevant.

I destroyed our environment, re-applied our terraform to it, and finally made a simple change to our config:

tags = {"bob" = "BEN"}

A subsequent plan contained the expected changes but also some surprises. Note that in the following plan output I've omitted the expected changes and replaced some values with "...":

  # module.eks.data.aws_iam_policy_document.worker_autoscaling will be read during apply
  # (config refers to values not yet known)
 <= data "aws_iam_policy_document" "worker_autoscaling"  {
      + id   = (known after apply)
      + json = (known after apply)

      + statement {
          + actions   = [
              + "autoscaling:DescribeAutoScalingGroups",
              + "autoscaling:DescribeAutoScalingInstances",
              + "autoscaling:DescribeLaunchConfigurations",
              + "autoscaling:DescribeTags",
              + "ec2:DescribeLaunchTemplateVersions",
            ]
          + effect    = "Allow"
          + resources = [
              + "*",
            ]
          + sid       = "eksWorkerAutoscalingAll"
        }
      + statement {
          + actions   = [
              + "autoscaling:SetDesiredCapacity",
              + "autoscaling:TerminateInstanceInAutoScalingGroup",
              + "autoscaling:UpdateAutoScalingGroup",
            ]
          + effect    = "Allow"
          + resources = [
              + "*",
            ]
          + sid       = "eksWorkerAutoscalingOwn"

          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "owned",
                ]
              + variable = "autoscaling:ResourceTag/kubernetes.io/cluster/accounts"
            }
          + condition {
              + test     = "StringEquals"
              + values   = [
                  + "true",
                ]
              + variable = "autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled"
            }
        }
    }

  # module.eks.data.template_file.kubeconfig will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "kubeconfig"  {
      + id       = (known after apply)
      + rendered = (known after apply)
      + template = <<~EOT
...
      + vars     = {
          + "aws_authenticator_additional_args" = ""
          + "aws_authenticator_command"         = "aws-iam-authenticator"
          + "aws_authenticator_command_args"    = <<~EOT
                        - "token"
                        - "-i"
                        - "..."
            EOT
          + "aws_authenticator_env_variables"   = ""
          + "cluster_auth_base64"               = "..."
          + "endpoint"                          = ...
          + "kubeconfig_name"                   = ...
        }
    }

  # module.eks.data.template_file.userdata[0] will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "userdata"  {
      + id       = (known after apply)
      + rendered = (known after apply)
      + template = ...
      + vars     = {
          + "additional_userdata"  = ""
          + "bootstrap_extra_args" = ""
          + "cluster_auth_base64"  = "..."
          + "cluster_name"         = ...
          + "endpoint"             = ...
          + "kubelet_extra_args"   = ""
          + "platform"             = "linux"
          + "pre_userdata"         = ""
        }
    }


  # module.eks.aws_iam_policy.worker_autoscaling[0] will be updated in-place
  ~ resource "aws_iam_policy" "worker_autoscaling" {
        arn         = ...
        description = ...
        id          = ...
        name        = ...
        name_prefix = ...
        path        = ...
      ~ policy      = jsonencode(
            {
              - Statement = [
                  - {
                      - Action   = [
                          - "ec2:DescribeLaunchTemplateVersions",
                          - "autoscaling:DescribeTags",
                          - "autoscaling:DescribeLaunchConfigurations",
                          - "autoscaling:DescribeAutoScalingInstances",
                          - "autoscaling:DescribeAutoScalingGroups",
                        ]
                      - Effect   = "Allow"
                      - Resource = "*"
                      - Sid      = "eksWorkerAutoscalingAll"
                    },
                  - {
                      - Action    = [
                          - "autoscaling:UpdateAutoScalingGroup",
                          - "autoscaling:TerminateInstanceInAutoScalingGroup",
                          - "autoscaling:SetDesiredCapacity",
                        ]
                      - Condition = {
                          - StringEquals = {
                              - autoscaling:ResourceTag/k8s.io/cluster-autoscaler/enabled = "true"
                              - autoscaling:ResourceTag/kubernetes.io/cluster/accounts    = "owned"
                            }
                        }
                      - Effect    = "Allow"
                      - Resource  = "*"
                      - Sid       = "eksWorkerAutoscalingOwn"
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
    }

  # module.eks.aws_launch_configuration.workers[0] must be replaced
+/- resource "aws_launch_configuration" "workers" {
        associate_public_ip_address      = false
        ebs_optimized                    = true
        enable_monitoring                = true
        iam_instance_profile             = ...
      ~ id                               = ... -> (known after apply)
        image_id                         = ...
        instance_type                    = "t3a.medium"
      + key_name                         = (known after apply)
      ~ name                             = ... -> (known after apply)
        name_prefix                      = ...
        security_groups                  = [
            "sg-0fe44028fce916e48",
        ]
      ~ user_data_base64                 = "..." -> (known after apply) # forces replacement
      - vpc_classic_link_security_groups = [] -> null

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + no_device             = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      ~ root_block_device {
            delete_on_termination = true
          ~ encrypted             = false -> (known after apply)
            iops                  = 0
            volume_size           = 100
            volume_type           = "gp2"
        }
    }

Of particular interest are the ebs_block_device block and the change to the policy section. Although I'm guessing these changes are safe and/or no-ops (I have yet to dig in and prove this), the fact that they show up in the plan at all is surprising and makes getting simple changes approved more difficult than it should be.
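
For context, my working mental model of why these data sources show up at all (a minimal, self-contained sketch with made-up names, not the module's actual internals): whenever a data source's configuration refers to a value Terraform cannot know until apply, the read is deferred, and everything that consumes its result is then shown as "(known after apply)", even if the final value turns out identical to what is already in state.

# Hypothetical standalone example of the "(config refers to values not yet known)"
# behaviour; the resource names and the triggering input are illustrative only.
variable "some_input" {
  type = string
}

resource "random_id" "suffix" {
  byte_length = 2

  keepers = {
    # Changing this input forces a new random_id, so .hex is unknown at plan time.
    trigger = var.some_input
  }
}

data "aws_iam_policy_document" "example" {
  statement {
    sid       = "Example"
    effect    = "Allow"
    actions   = ["autoscaling:DescribeAutoScalingGroups"]
    resources = ["*"]

    condition {
      test     = "StringEquals"
      values   = ["owned"]
      # Unknown at plan time, so the whole document is deferred to apply.
      variable = "autoscaling:ResourceTag/kubernetes.io/cluster/${random_id.suffix.hex}"
    }
  }
}

resource "aws_iam_policy" "example" {
  name   = "example"
  # Consumes a deferred value, so the plan reports the policy as "(known after apply)".
  policy = data.aws_iam_policy_document.example.json
}

If something like that is what is happening inside the module, the policy update is presumably a no-op once the deferred values resolve to their previous values, but the plan has no way of showing that, which is exactly what makes review harder.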

Most of this can be reproduced with a very simple EKS config (though admittedly not things like ebs_block_device):

module "eks" {
  source                    = "terraform-aws-modules/eks/aws"
  cluster_name              = var.cluster-name
  subnets                   = module.vpc.private_subnet_ids
  cluster_enabled_log_types = ["api", "authenticator", "controllerManager"]
  //tags = {"bob" = "BEN"}

  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true

  workers_additional_policies = [aws_iam_policy.eks-cloudwatch-logs-policy.arn]
  vpc_id                      = module.vpc.vpc_id

  worker_create_security_group  = true
  cluster_create_security_group = true
}
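
To make the repro concrete, this is roughly the full module call with the tag change applied (I've pinned the version to the one from the environment details below; the tag key and value are arbitrary):

module "eks" {
  source                    = "terraform-aws-modules/eks/aws"
  version                   = "7.0.1"
  cluster_name              = var.cluster-name
  subnets                   = module.vpc.private_subnet_ids
  cluster_enabled_log_types = ["api", "authenticator", "controllerManager"]

  # Uncommenting/adding this line is the only change between the two plans.
  tags = { "bob" = "BEN" }

  cluster_endpoint_public_access  = true
  cluster_endpoint_private_access = true

  workers_additional_policies = [aws_iam_policy.eks-cloudwatch-logs-policy.arn]
  vpc_id                      = module.vpc.vpc_id

  worker_create_security_group  = true
  cluster_create_security_group = true
}

With that single change, a plan against this minimal config already shows the deferred data sources and the in-place policy update (though, as noted, not the ebs_block_device part).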

What's the expected behavior?

In this case I'm not clear that some of these changes should be in the plan output at all.

Are you able to fix this problem and submit a PR? Link here if you have already.

Environment details

  • Affected module version: 7.0.1
  • OS: Windows
  • Terraform version: 0.12.18

Any other relevant info


stale bot commented Apr 6, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Apr 6, 2020

stale bot commented May 6, 2020

This issue has been automatically closed because it has not had recent activity since being marked as stale.

stale bot closed this as completed May 6, 2020

github-actions bot commented

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators Nov 26, 2022