
ebs_block_device shows changes on every plan/apply after 2.7.0 update #8480

Closed
hylaride opened this issue Apr 29, 2019 · 24 comments
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service.

Comments

hylaride commented Apr 29, 2019

Hi!

Somewhere between the 2.6.0 and 2.7.0 provider releases, one of our ebs_block_device configurations started showing changes to be applied on every plan for some of our instances. This still occurs in 2.8.0.

[INFO] Switching to v0.11.13
[INFO] Switching completed
~ module.es_cluster.aws_instance.esnode[0]
    ebs_block_device.2659407853.iops:                    "3072" => ""
    ebs_block_device.2659407853.snapshot_id:             "" => "<computed>"
    ebs_block_device.2659407853.volume_id:               "vol-034b1cde3080b2a08" => "<computed>"

~ module.es_cluster.aws_instance.esnode[1]
    ebs_block_device.2659407853.iops:                    "3072" => ""
    ebs_block_device.2659407853.snapshot_id:             "" => "<computed>"
    ebs_block_device.2659407853.volume_id:               "vol-0f735193efd918596" => "<computed>"

~ module.es_cluster.aws_instance.esnode[2]
    ebs_block_device.2659407853.iops:                    "3072" => ""
    ebs_block_device.2659407853.snapshot_id:             "" => "<computed>"
    ebs_block_device.2659407853.volume_id:               "vol-0912bc7b901873436" => "<computed>"

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

terraform -v
Terraform v0.11.13
+ provider.archive v1.2.1
+ provider.aws v2.8.0
+ provider.null v2.1.1
+ provider.template v2.1.1

Affected Resource(s)

  • aws_instance

Terraform Configuration Files

variable "data_volume_size" {
  type        = "string"
  default     = "1024"
  description = "The size of the EBS data volume to attach to the nodes"
}

variable "data_volume_delete_on_termination" {
  type        = "string"
  default     = true
  description = "Whether the data volume should be destroyed on instance termination"
}


resource "aws_instance" "esnode" {
  count                = "${var.size}"
  ami                  = "${data.aws_ami.es_ami.image_id}"
  instance_type        = "${var.instance_type}"
  ebs_optimized        = true
  subnet_id            = "${element(var.subnet_ids, count.index)}"
  user_data            = "${data.template_file.elasticsearch_userdata.rendered}"
  monitoring           = "${var.enable_monitoring}"
  iam_instance_profile = "${aws_iam_instance_profile.elasticsearch.id}"

  vpc_security_group_ids = [
    "${var.security_groups}",
    "${aws_security_group.elasticsearch.id}",
    "${aws_security_group.elasticsearch_client.id}",
  ]

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "250"
    delete_on_termination = true
  }

  ebs_block_device {
    device_name           = "/dev/sdf"
    volume_type           = "gp2"
    volume_size           = "${var.data_volume_size}"
    delete_on_termination = "${var.data_volume_delete_on_termination}"
    encrypted             = true
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes        = ["user_data", "ami"]
  }

  tags {
    Role        = "elasticsearch"
    Name        = "${format("esnode-%02d", count.index + 1)}"
    Environment = "${var.environment}"
    ESCluster   = "${var.name}"
  }
}

Debug Output

Our debug output is littered with secrets. Let me know if there's somewhere more secure I can send it than a gist.

Panic Output

N/A

Expected Behavior

There were no changes to the EBS volumes, so Terraform should leave them alone.

Actual Behavior

Every terraform plan/apply shows pending changes.

Steps to Reproduce

  1. terraform apply

Important Factoids

Rolling back to the 2.6.0 aws provider "fixes" this.
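
For anyone needing the rollback workaround, a minimal sketch of pinning the provider in 0.11 syntax (the region value is illustrative):

```hcl
# Pin the AWS provider to the last release without the spurious diff.
# The region below is illustrative; use your own.
provider "aws" {
  version = "= 2.6.0"
  region  = "us-east-1"
}
```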

References

  • #0000
@nywilken nywilken added the service/ec2 Issues and PRs that pertain to the ec2 service. label May 2, 2019
matthiasr (Contributor) commented May 6, 2019

Is this maybe related to #8343 @bflad?

I am running into this as well, going 1.56 -> 2.8, but I cannot reproduce it on a new instance, so I have not tried applying the diff. (@hylaride, is applying these changes safe, or will it destroy/detach the volume?)

Of note may be that I have this happen both inside and outside a module, but only for ebs_block_device embedded in an aws_instance resource.

I cannot discern a difference, but in case it's useful, here is the snippet from terraform state pull for two cases that show the change and one that doesn't:

case 1

state

                            "ebs_block_device.#": "1",
                            "ebs_block_device.3905984573.delete_on_termination": "true",
                            "ebs_block_device.3905984573.device_name": "/dev/xvdb",
                            "ebs_block_device.3905984573.encrypted": "true",
                            "ebs_block_device.3905984573.iops": "7500",
                            "ebs_block_device.3905984573.snapshot_id": "",
                            "ebs_block_device.3905984573.volume_id": "vol-redacted",
                            "ebs_block_device.3905984573.volume_size": "2500",
                            "ebs_block_device.3905984573.volume_type": "gp2",

plan

  ~ module.mysql-redacted.aws_instance.mysql-cluster-ebs[4]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "true" => "true"
      ebs_block_device.3905984573.iops:                  "7500" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-redacted" => <computed>
      ebs_block_device.3905984573.volume_size:           "2500" => "2500"
      ebs_block_device.3905984573.volume_type:           "gp2" => "gp2"

case 2

state

                            "ebs_block_device.#": "1",
                            "ebs_block_device.3905984573.delete_on_termination": "true",
                            "ebs_block_device.3905984573.device_name": "/dev/xvdb",
                            "ebs_block_device.3905984573.encrypted": "true",
                            "ebs_block_device.3905984573.iops": "0",
                            "ebs_block_device.3905984573.snapshot_id": "",
                            "ebs_block_device.3905984573.volume_id": "vol-redacted",
                            "ebs_block_device.3905984573.volume_size": "500",
                            "ebs_block_device.3905984573.volume_type": "sc1",

plan

  ~ aws_instance.redacted[2]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "true" => "true"
      ebs_block_device.3905984573.iops:                  "0" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-redacted" => <computed>
      ebs_block_device.3905984573.volume_size:           "500" => "500"
      ebs_block_device.3905984573.volume_type:           "sc1" => "sc1"

case 3 (does not have this issue)

state

                            "ebs_block_device.#": "1",
                            "ebs_block_device.3905984573.delete_on_termination": "true",
                            "ebs_block_device.3905984573.device_name": "/dev/xvdb",
                            "ebs_block_device.3905984573.encrypted": "true",
                            "ebs_block_device.3905984573.iops": "100",
                            "ebs_block_device.3905984573.snapshot_id": "",
                            "ebs_block_device.3905984573.volume_id": "vol-redacted",
                            "ebs_block_device.3905984573.volume_size": "15",
                            "ebs_block_device.3905984573.volume_type": "gp2",

plan

No changes. Infrastructure is up-to-date.

hylaride (Author) commented May 7, 2019

@matthiasr Yes, applying them is safe, but the "changes" just show up again on another plan/apply.

hylaride (Author) commented May 9, 2019

@mattburgess Also, FYI I recreated the instances and the problem persists.

leighmhart (Contributor) commented May 10, 2019

FYI, explicitly adding iops to the ebs_block_device section of the instances doesn't fix this either; the provider ignores the configured value and still plans iops => "".

sethbacon commented May 16, 2019

I have the same thing occur with tagging.

sethbacon commented Jun 5, 2019

@bflad Is anyone available to take a look at this? It causes problems in our TFE because we have multiple workspaces connected to the same root repo, so every push to the repo triggers a plan on multiple workspaces, which in turn triggers this issue. We have to periodically cancel, discard, or just approve the change so the workspace isn't later blocked.

matthiasr (Contributor) commented Jun 5, 2019

I'm working around this issue by rebuilding all the affected instances. Not a good solution, but it works in our context.

hylaride (Author) commented Jun 5, 2019

For the record, I replaced all my instances and this is still hitting me.

matthiasr (Contributor) commented Jun 5, 2019

For me, it goes away if I replace instances using the newest provider version.

stibi commented Jun 10, 2019

I see the same problem. My context is aws_spot_instance_request, with count in play as well.

  ~ module.project-be-test.aws_spot_instance_request.app[0]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "false" => <computed>
      ebs_block_device.3905984573.iops:                  "100" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-01a57d5de90e5b2e5" => <computed>
      ebs_block_device.3905984573.volume_size:           "20" => "20"
      ebs_block_device.3905984573.volume_type:           "gp2" => "gp2"

  ~ module.project-be-test.aws_spot_instance_request.app[1]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "false" => <computed>
      ebs_block_device.3905984573.iops:                  "100" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-05f70c7c194f3189c" => <computed>
      ebs_block_device.3905984573.volume_size:           "20" => "20"
      ebs_block_device.3905984573.volume_type:           "gp2" => "gp2"

  ~ module.project-be-test.aws_spot_instance_request.micro[0]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "false" => <computed>
      ebs_block_device.3905984573.iops:                  "100" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-0fcfb0427da6243eb" => <computed>
      ebs_block_device.3905984573.volume_size:           "20" => "20"
      ebs_block_device.3905984573.volume_type:           "gp2" => "gp2"

  ~ module.project-be-test.aws_spot_instance_request.micro[1]
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.3905984573.delete_on_termination: "true" => "true"
      ebs_block_device.3905984573.device_name:           "/dev/xvdb" => "/dev/xvdb"
      ebs_block_device.3905984573.encrypted:             "false" => <computed>
      ebs_block_device.3905984573.iops:                  "100" => ""
      ebs_block_device.3905984573.snapshot_id:           "" => <computed>
      ebs_block_device.3905984573.volume_id:             "vol-08539a7db96434fc6" => <computed>
      ebs_block_device.3905984573.volume_size:           "20" => "20"
      ebs_block_device.3905984573.volume_type:           "gp2" => "gp2"
❯ terraform -v
Terraform v0.11.14
+ provider.aws v2.14.0

matthiasr (Contributor) commented Jun 12, 2019

I take it back; right after replacing instances it was gone, but now I see this diff again.

houstonj1 commented Jun 13, 2019

Seeing this issue as well with iops on ebs_block_device

Terraform v0.11.13
+ provider.aws v2.14.0

danbudrisef commented Jun 20, 2019

I'm seeing this issue as well, with

Terraform v0.11.14
provider.aws v2.15.0

This is a big problem -- we have dozens of instances, and they're all affected by this. It's not practical to rebuild them. Does anyone know a workaround?

We're creating instances from a module which includes the root volume attached to the instance. We'll update the module to use a volume attachment rather than an embedded volume. In the meantime, we're getting the below:

  ~ module.ServerThing.aws_instance.server
      ebs_block_device.#:                                "1" => "1"
      ebs_block_device.1635559443.delete_on_termination: "true" => "true"
      ebs_block_device.1635559443.device_name:           "/dev/sdf" => "/dev/sdf"
      ebs_block_device.1635559443.encrypted:             "false" => "false"
      ebs_block_device.1635559443.iops:                  "450" => ""
      ebs_block_device.1635559443.snapshot_id:           "" => <computed>
      ebs_block_device.1635559443.volume_id:             "vol-0c628281fdf0fb9e8" => <computed>
      ebs_block_device.1635559443.volume_size:           "150" => "150"
      ebs_block_device.1635559443.volume_type:           "gp2" => "gp2"
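
For reference, the separate-attachment layout mentioned above looks roughly like this in 0.11 syntax (a sketch; names, sizes, and variables are illustrative, not our actual module):

```hcl
# Manage the data volume as its own resource instead of an embedded
# ebs_block_device, so instance diffs don't touch it.
resource "aws_instance" "server" {
  ami           = "${var.ami_id}"
  instance_type = "${var.instance_type}"
  # no ebs_block_device block here
}

resource "aws_ebs_volume" "data" {
  availability_zone = "${aws_instance.server.availability_zone}"
  size              = 150
  type              = "gp2"
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = "${aws_ebs_volume.data.id}"
  instance_id = "${aws_instance.server.id}"
}
```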

cayla commented Jun 20, 2019

@danbudrisef I had this issue when I was getting everything upgraded to 0.12. Since I got everything on the latest version (terraform and providers), the issue went away.

But before I completed the upgrade, I was able to suppress this issue by using ignore_changes on (iirc) iops.

So something like:

  lifecycle {
    create_before_destroy = true
    ignore_changes        = ["iops", "user_data", "ami"]
  }

Sorry I don't remember the exact details -- like I said, the issue went away for me once everything was upgraded to the latest and greatest.

EDIT: versions for ref:

Terraform v0.12.2
+ provider.archive v1.2.2
+ provider.aws v2.12.0
+ provider.cloudflare v1.15.0

@aeschright aeschright added the needs-triage Waiting for first response or review from a maintainer. label Jun 24, 2019
hylaride (Author) commented Jul 4, 2019

Just adding an update here that 0.12 seems to have suppressed the error for me as well. I'm not sure if HashiCorp now considers this "fixed", so I'll leave this open. I won't mind either way if a HashiCorp employee closes it. :-)

jstaf commented Jul 29, 2019

I still see this issue on Terraform 0.12.5. The only change in 0.12 is that the actual "we're going to make a change" message has been suppressed in the new output format (it says instances will be "changed in-place" without actually saying what those changes are). See https://github.com/hashicorp/terraform/issues/22175 for an example of the iops changes being suppressed in the output while Terraform still attempts to change the instances.

These were the only two workarounds that actually worked for me:

  • Downgrading the AWS provider to version 2.6 (which also forces you to stay on terraform versions <0.12).
  • The following also seems to work to ignore the changes in Terraform 0.11.x, but not in 0.12.x (I was unable to ignore changes to any part of the ebs_block_device in 0.12.x; the following seems to have no effect there, regardless of the syntax I tried):
  lifecycle {
    ignore_changes = ["ebs_block_device"]
  }

ghost commented Aug 12, 2019

I am having a similar issue in Terraform 0.12.6 with aws provider 2.23.0.
I have an AMI that was created with a root drive and 4 additional drives.
I create an aws_instance with the AMI and use the root_block_device and 4 ebs_block_device entries to modify the KMS key and volume sizes. I have verified that the device names in the ebs_block_device entries exactly match the values listed in the AMI block device mapping.

The instance appears to spin up correctly, but if I immediately run another plan/apply, all 4 ebs_block_device entries are listed as needing replacement.

I can force it to work by adding ebs_block_device to the ignore_changes list, but this seems like overkill.
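
For anyone wanting the same workaround, a sketch of the lifecycle block in 0.12 syntax (resource name illustrative):

```hcl
resource "aws_instance" "example" {
  # ... ami, instance_type, root_block_device, ebs_block_device blocks ...

  lifecycle {
    # Suppresses the spurious diff, at the cost of also hiding any
    # real changes to the block devices.
    ignore_changes = [ebs_block_device]
  }
}
```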

@aeschright aeschright added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Dec 18, 2019
scdc-galvin commented Jan 20, 2020

I am having the same issue with any aws_instance deployed from an ami containing multiple volumes.

In our case we have a packer generated ami with /var and /tmp relocated onto separate drives. The size of each of these additional drives is minimized (1-2GB each) to save space on the box running packer. The aws_instance contains ebs_block_device definitions for all non-root volumes included in the ami.

When the instance is deployed, terraform properly resizes the block devices to the size specified in the ebs_block_device definition. All appears to deploy correctly.

A subsequent terraform plan, immediately after running terraform apply, always wants to replace all ebs_block_devices and consequently all instances. If I comment out the ebs_block_device definitions or add ebs_block_device to ignore_changes, then terraform is happy.

Terraform v0.12.19
provider.aws v2.44.0

jstaf commented Jan 20, 2020

I'm not sure if this is helpful, but I stopped running into issues after rm -rf-ing my .terraform directory and rerunning terraform init. I think there was something in there from an earlier Terraform version that was messing me up, and was fixed by pulling down the latest state/versions of the terraform modules/providers I was using.

fnaoto commented Jan 27, 2020

I've got the same issue and fixed it.

I'm not sure why, but this works for me:

  lifecycle {
    ignore_changes = [
      user_data,
    ]
  }

Does anyone know more about this?

  • Terraform v0.12.19
  • aws.provider v2.46

stromnet commented Jan 31, 2020

Not sure if this is the same issue, but perhaps related:
I have a simple aws_instance resource, with *no* ebs_block_device in the Terraform file. But I have an external aws_ebs_volume + aws_volume_attachment which attaches the volume to the instance. From the docs, I gather this is the way to avoid the EBS volume being destroyed if the EC2 instance is recreated/changed. This works fine on initial setup, but whenever I run a plan again, it picks up a change in ebs_block_device:

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
      - ebs_block_device {
          - delete_on_termination = false -> null
          - device_name           = "/dev/sdf" -> null
          - encrypted             = false -> null
          - iops                  = 100 -> null
          - volume_id             = "vol-049121627828c90ed" -> null
          - volume_size           = 20 -> null
          - volume_type           = "gp2" -> null
        }

Adding this does not help:

  lifecycle {
    ignore_changes = [
      ebs_block_device,
    ]
  }

Terraform v0.12.20
+ provider.aws v2.45.0
+ provider.null v2.1.2
+ provider.random v2.2.1

stromnet commented Feb 24, 2020

A correction to my comment above: apparently it was security_groups that caused the change to be triggered, so it's not related to ebs_block_device:

      ~ security_groups              = [ # forces replacement
          + "sg-09bdb6f37a4cd2e4b",
        ]

The instance SG was configured as:

security_groups = [module.prometheus_sg.this_security_group_id]

(where the prometheus_sg module is https://github.com/terraform-aws-modules/terraform-aws-security-group).
Looking at terraform show, the security_groups attribute was an empty list, but vpc_security_group_ids had the right ID.
Changing the instance config to use vpc_security_group_ids rather than security_groups seems to have resolved this issue, and I no longer need ignore_changes.
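
In other words, the fix was a change along these lines (a sketch; resource name illustrative):

```hcl
resource "aws_instance" "prometheus" {
  # ...

  # Before (forced replacement on every plan; security_groups takes
  # names for EC2-Classic/default-VPC instances):
  # security_groups = [module.prometheus_sg.this_security_group_id]

  # After (for instances in a VPC, pass IDs here instead):
  vpc_security_group_ids = [module.prometheus_sg.this_security_group_id]
}
```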

MahaElOuni commented Feb 26, 2020

Finally, it works for me. Thank you @stromnet!

ghost commented Jun 11, 2020

I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@hashicorp hashicorp locked and limited conversation to collaborators Jun 11, 2020