
state commands with remote state backends #15652

Merged: 6 commits, Aug 3, 2017

Conversation

jbardin
Member

@jbardin jbardin commented Jul 27, 2017

Some state commands worked with the backend configuration (push, pull, list, show), while others ignored it (mv, rm), leading to confusing behavior when trying to refactor state files. The website docs were also incorrectly updated, and did not match the behavior of the command, making the overall usage of the state commands confusing and possibly dangerous.

This makes the rm and mv commands also work with a configured backend, reflecting the behavior of the other state commands, and the logical expectations of users.

The one piece missing here is that there is no single command to move resources or modules from a local state file to a remote state, or between remote states. There are a number of ways we could introduce the behavior, for example a special identifier for -state-out like @backend, but complex refactoring probably warrants pulling the state into local files anyway, and not all possible options need to be supported. We can put off handling remote targets and workspaces for future refactoring.
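The interim workaround described above (pulling the state into local files) can be sketched roughly as follows; the state file names and the resource address are illustrative placeholders, not part of this PR:

```shell
# Rough sketch of moving a resource between remote states by staging
# them locally. All file names and addresses are placeholders.
terraform state pull > source.tfstate          # from the configured backend
terraform state mv \
    -state=source.tfstate \
    -state-out=dest.tfstate \
    aws_instance.example aws_instance.example  # hypothetical address
terraform state push source.tfstate
# Re-run terraform init against the destination backend, then:
# terraform state push dest.tfstate
```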

Fixes #10481

The default value for the -state flag was overriding the location of any
remote state.
In order to use a backend for the state commands, we need an initialized
meta. Use a single Meta instance rather than temporary ones to make sure
the backends are initialized properly.
These already include detailed messages, and it's not a usage issue,
it's a config or file location issue.
Member

@apparentlymart apparentlymart left a comment


Looks good to me!

Could we be more explicit about the fact that using -state causes the configured backend to be ignored? Currently it's implied by explaining what happens when the flag isn't set ("By default it will...") but perhaps we could use more direct language here, like:

-state=FILE - Ignore the current backend configuration and instead operate on the given local state file.

(I generally prefer to have CLI option docs written as imperative statements, but I see that the others here are written more like descriptions of their arguments; I would be okay with leaving this as-is for now and then deciding separately whether we want to holistically update all the CLI option docs to be imperative statements, if you like.)

(The usage text under discussion:)

    to. This defaults to the same statefile. This will
    overwrite the destination state file.
    -state-out=PATH  Path to the destination state file to write to. If this
                     isn't specified, the source state file will be used. This
Member Author


oops, got some tabs in here

Update the documentation to match the current behavior, and make the
usage output and website docs match.
@marcbachmann

Do you plan a release soon? I just ran into the state mv issue.

@dnk8n

dnk8n commented Sep 13, 2017

I am aiming to rename a resource in the remote (S3) state file, from app_deploying to app_deployed, once an instance has passed all tests. This is so that next time we deploy there would not be a clash in the state files.

Next time we deploy, we would first destroy the old app_deployed before renaming the new app_deploying to app_deployed.

Hope the above makes sense and is reasonable. But when trying the command,

terraform state mv aws_instance.app_deploying aws_instance.app_deployed

I get:

Backend reinitialization required. Please run "terraform init".
Reason: Unsetting the previously set backend "s3"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.

Error loading the state: Initialization required. Please see the error message above.

Please ensure that your Terraform state exists and that you've
configured it properly. You can use the "-state" flag to point
Terraform at another state file.

Terraform version: v0.10.4

Is the above command expected to work?

Edit: Please note, I did initialise with terraform init. The resulting config is:

{
    "version": 3,
    "serial": 1,
    "lineage": "00fd58eb-9343-46ec-80cc-16ffa9e443b3",
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "*******-terraform-remote-state",
            "encrypt": true,
            "key": "staging_terraform.tfstate",
            "region": "us-east-1",
            "workspace_key_prefix": "staging"
        },
        "hash": 17683171290306175748
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
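For reference, this file lives at .terraform/terraform.tfstate and is what Terraform compares against the current configuration on each run. A quick way to inspect it, assuming python3 is available (the one-liner is an illustrative helper, not a Terraform command):

```shell
# Print the backend type and bucket recorded at init time.
# .terraform/terraform.tfstate is where Terraform saves this file;
# the python3 one-liner is just an illustrative helper.
STATE=.terraform/terraform.tfstate
if [ -f "$STATE" ]; then
    python3 -c 'import json, sys; d = json.load(open(sys.argv[1])); print(d["backend"]["type"], d["backend"]["config"]["bucket"])' "$STATE"
else
    echo "not initialised: $STATE missing"
fi
```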

@jbardin
Member Author

jbardin commented Sep 13, 2017

@dnk8n,

I doubt the workflow you describe has anything to do with that failure, but it should work as expected.

When you run the state command, Terraform sees that the config has changed for some reason; most likely your backend config is in a subdirectory that you are providing as an argument to plan and apply. If that's the case, you need to duplicate the backend config in the current directory for the state command.

It's on our roadmap to try to reconcile how Terraform can operate on paths in some cases while retaining its local config state in the working directory.
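As a concrete (hypothetical) illustration of that situation, assuming the backend block lives in envs/staging/ and plan/apply were run with that path as an argument:

```shell
# Hypothetical layout: the backend configuration sits in envs/staging/,
# and plan/apply were invoked as "terraform plan envs/staging".
# State commands read the backend from the current directory, so run
# them from the directory holding the backend config:
cd envs/staging
terraform init
terraform state mv aws_instance.app_deploying aws_instance.app_deployed
```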

@dnk8n

dnk8n commented Sep 14, 2017

Thanks @jbardin

I realise the issue. I have to cd into the directory of the Terraform configs in order to run this command. I had initialised by passing the path as the final argument to init (for some reason I have got into the habit of never using cd when writing automation if I can help it).

Appending the path like I did with init is not an option with terraform state mv, though. So in this case I will need to cd. Is this expected behaviour?

@dnk8n

dnk8n commented Sep 14, 2017

Re-reading your comment after working out what I described in my last comment makes it clearer now. I think you have already answered my questions: it is expected behaviour for now, but you plan to reconcile it.

Thanks

@mukund1989

I still don't understand from the docs how to use terraform state mv on remote state.
Should I explicitly pull to local first?

I am trying to move certain modules out of the original remote statefile to a new remote statefile.

I am doing the below (my backend is configured to point to the correct bucket and path to the original statefile):

  • tf init
  • tf state mv module.something.something -state-out=newstate.tfstate

At this stage is newstate.tfstate on the local machine ? I do not see it.

@djalexd

djalexd commented Aug 6, 2018

@mukund1989 I am able to push the resource to a new state with Terraform 0.11.7. However, this only works if the destination is a new file.

When the destination exists, the source is updated (the resource gets deleted, but I assume the next terraform plan/apply would create it again), but the destination is not! I'll experiment some more and come back with results.

@djalexd

djalexd commented Aug 6, 2018

So, it seems to work with some plumbing commands:

For example, if you want to move resource from state A to state B:

aws s3 cp s3://<bucket>/a.terraform.tfstate .   (download source state)
aws s3 cp s3://<bucket>/b.terraform.tfstate .   (download destination state)
terraform state mv -state=a.terraform.tfstate -state-out=b.terraform.tfstate <resource> <resource>
aws s3 cp a.terraform.tfstate ... s3:// (upload back)

This works, but as you can see, it's pretty rudimentary. Perhaps there is another solution :)
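A possible variant that stays within Terraform, using terraform state pull and terraform state push instead of copying the objects with the AWS CLI (the module address is a placeholder):

```shell
# Alternative sketch using state pull/push rather than aws s3 cp.
# Run from the working directory whose backend points at state A:
terraform state pull > a.tfstate
# (pull state B the same way from its own working directory)
terraform state mv -state=a.tfstate -state-out=b.tfstate \
    module.example module.example    # placeholder address
terraform state push a.tfstate
# push b.tfstate from the working directory configured for state B
```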

@ghost

ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@hashicorp hashicorp locked and limited conversation to collaborators Apr 2, 2020
Successfully merging this pull request may close these issues.

Docs: Provide some clarity with "terraform state mv" usage and remote states.
6 participants