No way to migrate from r/aws_s3_bucket_object → r/aws_s3_object #25412

Open
grimm26 opened this issue Jun 16, 2022 · 8 comments
Labels
question A question about existing functionality; most questions are re-routed to discuss.hashicorp.com. service/s3 Issues and PRs that pertain to the s3 service.

Comments

@grimm26
Contributor

grimm26 commented Jun 16, 2022

We all know there is a decent amount of pain involved in making the jump from v3 of the aws provider to v4 with regards to s3 bucket resources, but I’ve hit a bit of a roadblock with r/aws_s3_bucket_object → r/aws_s3_object. Basically, there seems to be no way around some sort of manual intervention: either removing the old resource from state and then importing the new one, or manually editing the state file to change the resource type from aws_s3_bucket_object to aws_s3_object.

Why is this resource not just aliased? Like aws_alb and aws_lb are the same thing. Keep the deprecation message for aws_s3_bucket_object, but let it live on where needed.

@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Jun 16, 2022
@justinretzolk
Member

Hey @grimm26 👋 Thank you for taking the time to raise this, and I'm sorry to hear you're having some trouble with the migration. You've nailed the potential paths forward, but for completeness, I'll link to the appropriate section of the v4 upgrade guide. Essentially, there are three possible options:

  1. Allow Terraform to recreate the bucket object on the next apply after moving from aws_s3_bucket_object to aws_s3_object.
  2. Import the aws_s3_object into the state (a rough command sketch follows this list).
  3. (Not recommended, but possible) Modify the state manually, as you described.
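
To make option 2 a bit more concrete, here's a rough sketch for a single object. The resource addresses, bucket, and key are placeholders, and you should check the provider documentation for the exact import ID format:

# Sketch only: drop the old resource from state, then import the same object
# under the new resource type (assumes the config has already been renamed
# to aws_s3_object).
terraform state rm 'aws_s3_bucket_object.example'
terraform import 'aws_s3_object.example' 'path/to/key'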

As far as why this route was taken rather than aliasing, we had a pretty lengthy discussion with the community around these changes over on #23106, where we discussed the paths we considered, and why we ultimately decided to go this route. I hope that conversation can provide a bit of additional clarity around that decision. We recognize that this change was fairly large and caused some pain in the user experience, and we hope not to have to make changes like this too often (which is why we limit them to major version releases, roughly once a year), but we felt it was necessary to ensure the continued maintainability and functionality of the provider going forward.

@justinretzolk justinretzolk added question A question about existing functionality; most questions are re-routed to discuss.hashicorp.com. service/s3 Issues and PRs that pertain to the s3 service. waiting-response Maintainers are waiting on response from community or contributor. and removed needs-triage Waiting for first response or review from a maintainer. labels Jun 16, 2022
@grimm26
Contributor Author

grimm26 commented Jun 17, 2022

@justinretzolk my particular pain in this is that I have a module that we use to create lambdas and it "provisions" the aws_lambda_function with a dummy code file that we upload into place using aws_s3_bucket_object. The new lambda is also plugged into our CI/CD pipeline for actual deployments from here on out. If we trigger a re-upload of that initial dummy object it will wipe out the actual lambda code. We have hundreds of these.

I've read that section of the upgrade guide and there is no automation for this. The cleanest way would seem to be doing a terraform state rm aws_s3_bucket_object.foo && terraform import aws_s3_object.foo blah, but I would need to create some sort of automation for this. A `moved` block or something like it would be nice, but I can't use one here. The rest of the upgrade to v4 aws_s3_* resources was actually pretty painless. This is all I have left to get off of pre-v4 deprecated resources because it is the only one that requires state manipulation or a destroy/create.
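
Roughly, the automation I'd have to write would look something like this (a sketch only; the grep pattern, the address rewriting, and the assumption that the object key alone works as an import ID are mine, not anything official):

#!/usr/bin/env bash
# Sketch: move every aws_s3_bucket_object in the state over to aws_s3_object.
# Assumes the .tf files have already been switched to aws_s3_object, and that
# the object key is a valid import ID (check the provider docs).
set -euo pipefail

for addr in $(terraform state list | grep 'aws_s3_bucket_object\.'); do
  new_addr="${addr/aws_s3_bucket_object./aws_s3_object.}"
  key="$(terraform state show "$addr" | awk '$1 == "key" {gsub(/"/, "", $3); print $3}')"
  terraform state rm "$addr"
  terraform import "$new_addr" "$key"
done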

There is no discussion of aws_s3_bucket_object in particular in the issue that you referenced. For the reasons I have gone over here, I think it is an outlier in this situation because it necessarily causes destruction without state manipulation. For situations like this, terraform needs to provide a moved{} block equivalent, especially since this seems to just be a change of the resource name at this point.

@github-actions github-actions bot removed the waiting-response Maintainers are waiting on response from community or contributor. label Jun 17, 2022
@Fuuzetsu

  2. Import the aws_s3_object into the state.

This doesn't seem to work. We started with

resource "aws_s3_bucket_object" "gitlab_config" {
  bucket = aws_s3_bucket.gitlab_config.id

  key     = "config.toml"
  content = module.gitlab_runner_config.config
}

So I made a new resource with the same values:

resource "aws_s3_object" "gitlab_config" {
  bucket = aws_s3_bucket.gitlab_config.id
  key ="config.toml"
  content = module.gitlab_runner_config.config
}

Then ran
terraform import <new_resource> <key>

Terraform says it was imported fine, but it doesn't actually seem to import much of anything: it wants to modify nearly all the attributes on the next apply.

  ~ resource "aws_s3_object" "gitlab_config" {
      + acl                           = "private"
      + cache_control                 = ""
      + content                       = (sensitive)
      + content_disposition           = ""
      + content_encoding              = ""
      + content_language              = ""
      + force_destroy                 = false
        id                            = "config.toml"
      + object_lock_legal_hold_status = ""
      + object_lock_mode              = ""
      + object_lock_retain_until_date = ""
      + server_side_encryption        = ""
        tags                          = {}
      ~ version_id                    = "1WR0x7vvgOTcbOU1nIY0.dp2GtQxCAQd" -> (known after apply)
      + website_redirect              = ""
        # (8 unchanged attributes hidden)
    }

This means I have to manually edit the state, basically copying the attributes from the previous one. For every single S3 object.

Unless I want it to outright replace the object, which has various consequences, like triggering EventBridge rules and such.

This seems very similar to #17791 except even more annoying to fix as it's not just a couple of deprecated attributes.

@borfig

borfig commented Aug 21, 2022

  1. Allow Terraform to recreate the bucket object on the next apply after moving from aws_s3_bucket_object to aws_s3_object.

When I replaced aws_s3_bucket_object with aws_s3_object in my project, I noticed that sometimes the new resource is stuck on creation because the old one was removed after the new one was created.
I am OK with replacing the objects, but how do I ensure that the old resource is destroyed before creating the new one when running a single terraform apply?

@borfig

borfig commented Aug 21, 2022

Assuming re-creating the S3 objects is an option, a possible workaround for migrating a for_each resource is as follows.

Let's assume that our initial state is:

locals {
  items = ["foo", "bar", "bas", "xip", "a", "b", "c", "x", "y", "z"]
}

resource "aws_s3_bucket_object" "this" {
  for_each = toset(local.items)
  bucket = "mybucket"
  key = each.key
  content = each.value
}

First, we terraform apply the following:

locals {
  items = ["foo", "bar", "bas", "xip", "a", "b", "c", "x", "y", "z"]
}

resource "aws_s3_bucket_object" "this" {
  for_each = toset([])
  bucket = "mybucket"
  key = each.key
  content = each.value
}

resource "aws_s3_object" "this" {
  depends_on = [
    aws_s3_bucket_object.this
  ]
  for_each = toset(local.items)
  bucket = "mybucket"
  key = each.key
  content = each.value
}

This will remove the aws_s3_bucket_object resources and re-create them as aws_s3_object resources.

In the next terraform apply we can remove the empty aws_s3_bucket_object resource.


With Terraform v1.1 or newer, we can also do the above for a single resource:

Let's assume that our initial state is:

resource "aws_s3_bucket_object" "this" {
  bucket = "mybucket"
  key = each.key
  content = each.value
}

First, we terraform apply the following:

moved {
  from = aws_s3_bucket_object.this
  to   = aws_s3_bucket_object.this[0]
}

resource "aws_s3_bucket_object" "this" {
  count = 0
  bucket = "mybucket"
  key = each.key
  content = each.value
}

resource "aws_s3_object" "this" {
  depends_on = [
    aws_s3_bucket_object.this
  ]
  bucket = "mybucket"
  key = each.key
  content = each.value
}

This will remove the aws_s3_bucket_object resource and re-create it as an aws_s3_object resource.

In the next terraform apply we can remove the empty aws_s3_bucket_object resource.


The depends_on setting is required to ensure that the new resources are created only after the old ones have been destroyed.

@grimm26
Contributor Author

grimm26 commented Aug 21, 2022

I ended up biting the bullet and just going through my state files and changing aws_s3_bucket_object resources to aws_s3_object. I wrote a script that would pull down the state file, do the substitutions, bump the serial number, and push it back up. It sucked, but I think it turned out to be the least painful way to get over the hump and just be done with it.
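
For reference, the core of that approach is more or less this (a sketch of the general idea, not my actual script; it assumes you keep a backup and that every aws_s3_bucket_object in the state really is a straight rename, since the two resources share nearly the same schema):

#!/usr/bin/env bash
# Sketch: pull the state, rename the resource type, bump the serial so the
# push is accepted, and push it back. Keep the backup around.
set -euo pipefail

terraform state pull > state.json
cp state.json state.json.backup

jq '(.resources[] | select(.type == "aws_s3_bucket_object") | .type) = "aws_s3_object"
    | .serial += 1' state.json > state.new.json

terraform state push state.new.json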

@SimonEdwardsMQA

I've added this note to #17791, but I'm putting it here as well for added exposure. When importing existing S3 objects, acl & force_destroy are missing, and Terraform tries to add them to the existing objects. To ignore this, add the following to the aws_s3_object resource. The alternative is to edit the state file, which I'm not a fan of doing.

lifecycle {
  ignore_changes = [
    acl,
    force_destroy,
  ]
}

@lorengordon
Contributor

Hopefully the resource will be updated for the new feature in Terraform 1.8:

  • Providers can now transfer the ownership of a remote object between resources of different types, for situations where there are two different resource types that represent the same remote object type.

    This extends the moved block behavior to support moving between two resources of different types only if the provider for the target resource type declares that it can convert from the source resource type. Refer to provider documentation for details on which pairs of resource types are supported.
    https://github.com/hashicorp/terraform/releases/tag/v1.8.0
