
v1.8 says Backend configuration block has changed when it hasn't #34974

Open
jorhett opened this issue Apr 10, 2024 · 7 comments
Labels
backend/s3, bug, new (new issue not yet triaged), v1.8 (issues, primarily bugs, reported against v1.8 releases)

Comments

@jorhett

jorhett commented Apr 10, 2024

Terraform Version

Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.43.0

Terraform Configuration Files

terraform {
  backend "s3" {
    bucket         = "terraform-state"
    region         = "us-west-1"
    key            = "test/terraform.tfstate"
  }
}

Debug Output

N/A

Expected Behavior

$ terraform plan test

Acquiring state lock. This may take a few moments...

Actual Behavior

$ terraform plan test

 Error: Backend initialization required: please run "terraform init"
│
│ Reason: Backend configuration block has changed
│
The "backend" is the interface that Terraform uses to store state,
...

Steps to Reproduce

  1. Make a small component with an s3 backend, apply with Terraform 1.7
  2. Switch to Terraform 1.8 and try to generate a plan

Additional Context

This change is not mentioned in the upgrade guide or the changelog.

References

No response

@jorhett added the bug and new (new issue not yet triaged) labels Apr 10, 2024
@apparentlymart
Member

Hi @jorhett! Sorry for this misbehavior, and thanks for reporting it.

From your reproduction steps, it sounds like you ran terraform init with Terraform v1.7, and then later ran terraform apply in the same directory with Terraform v1.8, without reinitializing the directory using Terraform v1.8. Is that true?

If so, it would probably help to run terraform init with Terraform v1.8 so it'll have an opportunity to rebuild the cached backend configuration for the working directory to incorporate the changes related to removing "the legacy workflow".

Alternatively, you could start with a fresh working directory (without any existing .terraform subdirectory) and that should produce essentially the same effect: the cached backend configuration will have been created based on the backend's schema from v1.8, and so should match with how the backend is now interpreting what's in your configuration.

If that does fix it for you, then we can change the upgrade guide to mention this additional hazard when reusing a pre-existing working directory. Otherwise, I'll leave this to the S3 provider team (who maintains this backend) for further investigation.

Thanks!

@marcelfrey29

Hi @apparentlymart ,
I had the same problem and both of your proposed solutions are working. Thanks!

When keeping the .terraform/ directory, I had to run terraform init -reconfigure.

After deleting the .terraform/ directory, terraform init can be used without the -reconfigure flag.
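Putting the two comments above together, the recovery options can be sketched as shell commands. This is a sketch only: it assumes the backend's state location itself has not changed between versions — if it has, `-reconfigure` is unsafe and `-migrate-state` is the relevant flag instead.

```shell
# Option 1: keep the existing .terraform/ directory and rebuild the
# cached backend configuration in place:
terraform init -reconfigure

# Option 2: discard the cached working-directory data and re-initialize
# from scratch (a plain `init` suffices here, no flag needed):
rm -rf .terraform
terraform init
```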

@jorhett
Author

jorhett commented Apr 11, 2024

@apparentlymart No, in this case I ran init/plan/apply with v1.7, then came back and tried to make a plan with v1.8, and was forced to run terraform init -reconfigure.

This is breaking the automation toolset for EVERY component in our organization, requiring manual intervention for each and every one... when there was zero change to the code.

Why can't the legacy workflow be removed without forcing every single person to hop out of the car, pop the hood, and apply a change that they didn't make?

At the very least, this should be mentioned in the changelog and the upgrade notes; right now it appears in neither. Better yet, don't tell people they changed something which they did not change. Can you at least own up to the problem?

Hey we changed the backend config in a backwards-incompatible way, sorry it's not you it was us. Please go run terraform init -reconfigure to make everything happy again.

@jorhett
Author

jorhett commented Apr 11, 2024

To be clear: we cannot simply change our release process to run terraform init -reconfigure automatically every time, because if a release does include a state location change that requires -migrate-state, a blind -reconfigure would instead disconnect the configuration from its state and POOF, all state is gone.

So if you really are saying that we have to re-init every root module any time a new Terraform version ships, then please give us some command by which we can confirm the backend DID NOT CHANGE and that a blind -reconfigure would be safe, or some other programmatic way that doesn't involve sending people to run raw commands in Terraform modules.
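Pending an official command for this, one rough diagnostic: `terraform init` records the backend block it saw in `.terraform/terraform.tfstate`. That is an internal, unversioned file, so the sketch below is a way to eyeball what is cached, not a supported interface. Listing the non-null keys of `.backend.config` shows which arguments the cached configuration carries; the sample JSON here stands in for a real working directory.

```shell
# Create sample data mimicking the cached backend block that a real
# .terraform/terraform.tfstate would contain (hypothetical stand-in):
cat > cached-backend.json <<'EOF'
{"backend": {"type": "s3", "config": {"bucket": "terraform-state",
 "region": "us-west-1", "key": "test/terraform.tfstate",
 "use_legacy_workflow": null}}}
EOF

# List the backend config keys that are actually set (non-null), to
# compare by eye against the backend block in your *.tf files:
jq -r '.backend.config | with_entries(select(.value != null)) | keys[]' \
  cached-backend.json
```

In this sample, the null `use_legacy_workflow` entry is filtered out, matching the grep result shown later in this thread: the cached file carries a key that no longer exists in the v1.8 schema, which is consistent with the "configuration block has changed" report.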

@jorhett
Author

jorhett commented Apr 11, 2024

Also, the page you linked to speaks only vaguely about the idea, with zero mention of the impact:

Terraform v1.8 completes this deprecation process by removing the use_legacy_workflow argument. The old behavior is no longer available, and so you will need to adopt the new behavior when upgrading to Terraform v1.8.

We have never in the history of our repo (10+ years of Terraform!) used this argument. Therefore it's hard to understand how I'm supposed to know that this sentence means I must manually reinitialize every module before I can plan again.

$ grep -h use_legacy_workflow */.terraform/* 2> /dev/null |sort | uniq
            "use_legacy_workflow": null,

@apparentlymart added the v1.8 (issues, primarily bugs, reported against v1.8 releases) label Apr 15, 2024
@danielhanold

I'm running into the very same issue that @jorhett described in his previous comments, and I want to echo all of his concerns and his surprise that this wasn't mentioned clearly in the release notes for Terraform 1.8:

  • Our organization has never used the use_legacy_workflow flag
  • All Terraform plans fail until a manual terraform init -reconfigure command has been executed (we're talking hundreds of broken plans)

This currently breaks all automations we have in place and requires a manual intervention for all Terraform configs.

@surskitt

I'm seeing the same problem and I've also never used use_legacy_workflow.

Should we expect to see a patch that will allow us to avoid having to reinitialise state?
