AWS access keys stored in .tfstate file and cross account access #4376

Closed
mschurenko opened this issue Dec 17, 2015 · 5 comments

@mschurenko

This is sort of covered in #1964 but not exactly, so I thought I'd create a separate issue.

My situation is this:

I have a wrapper script that automatically configures a remote s3 backend if ./.terraform/terraform.tfstate does not exist. The section that sets up the remote config is similar to this:

terraform \
  remote config \
  -backend=s3 \
  -backend-config="key=foo/terraform.tfstate" \
  -backend-config="bucket=some-s3-bucket" \
  -backend-config="region=us-west-2" \
  -backend-config="access_key=ACCESS_KEY" \
  -backend-config="secret_key=SECRET_KEY"

This doc https://www.terraform.io/docs/commands/remote-config.html suggests passing the access keys in via environment variables instead, so that they don't get stored in the .tfstate file like this:

{
    "version": 1,
    "serial": 8,
    "remote": {
        "type": "s3",
        "config": {
            "access_key": "ACCESS_KEY",
            "bucket": "some-s3-bucket",
            "key": "foo/terraform.tfstate",
            "region": "us-west-2",
            "secret_key": "SECRET_KEY"
        }
    },
    "modules": [

But using environment variables doesn't work for me, as the s3 bucket is in one account (I'll call this the Master Account) and the resources that terraform is going to manage are in a different account (I'll call this the Product Account). Currently I set AWS_PROFILE so that terraform uses the access keys for a user in the Product Account. I have another set of access keys that are used to set up the remote config so that terraform can read, write, and update the tfstate files in the bucket in the Master Account. If I set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, these take precedence over the profile in AWS_PROFILE, and then my wrapper script that invokes terraform can't manage resources in the Product Account.

My questions are as follows:

  1. Why does terraform store the access keys in the .tfstate file at all? Why can't it store them in another file in ./.terraform that does not get synced to s3?
  2. Based on some GitHub issues, it seems like the way remote config works might be overhauled(?). If so, is there a plan/timeline for this, and would it solve this problem?
  3. Ideally I would like to have the AWS profile from the Product Account assume a role in the Master Account which would grant the necessary permissions for the s3 bucket. Will this be supported in the future without having to pass credentials in via environment variables?

I hope I've clearly explained my situation.

Thanks!

@robomon1

robomon1 commented Jan 9, 2016

Totally agree with #1. I would go further and suggest that the whole "remote" section should be stored in a separate .terraform file and only kept local. The "remote" has nothing to do with the "state". Imagine a dev being able to store the remote state file to S3 but have your CI system grab it over http from a server that uses that S3 bucket to serve files. It is the "state" that is important. The location I get the state file from shouldn't matter.

@daodennis-zz

It would be prudent to verify the other providers as well, but also to prevent this from happening in general as we add providers, by marking secret data appropriately.

I did just do a test run with the GCE provider and did not notice any credentials stored in the tfstate.

@blalor
Contributor

blalor commented Oct 20, 2016

I landed here looking for a solution to this problem. The value part of a -backend-config option is not interpolated, so you can't use a variable here. For example -backend-config=access_key=${var.remote_state_access_key}. The value gets used literally and causes an authentication error with AWS. It seems that the only way to not end up with these credentials written to the terraform state is to use the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables, which can be a problem if, as @mschurenko demonstrated, you want to use multiple accounts.

@teamterraform
Contributor

Hi all!

The ability to set credentials directly as arguments is something Terraform offers for pragmatism, but it's best saved for situations where it cannot be avoided, because Terraform will then store those settings in the cached backend configuration. (Note that as of Terraform 0.9, that's not part of the state snapshot, even though the file it's stored in is still called terraform.tfstate for historical reasons.)

Since this issue was originally opened, we've documented Multi-account AWS Architecture as the canonical way to use Terraform across multiple AWS accounts, which includes the practice of using a different account for the backend than for the provider.

The timeline here isn't totally clear, but we believe that at the time this issue was opened the AWS provider and S3 backend didn't yet have all of the features required to implement what's described in that guide. The "assume role" support in the AWS provider is the key feature that makes that approach possible, allowing Terraform to work from a single root set of AWS credentials and use those to get temporary, controlled access to deploy into other AWS accounts as needed.
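
For illustration, a minimal sketch of that provider-level assume-role configuration (the region, account ID, and role name are placeholders, not values taken from this issue):

# Credentials for the "root" set (e.g. the Master Account) come from the usual
# sources: environment variables, a shared credentials file, or an instance
# profile. The provider then assumes a role in the target (Product) account
# and uses the resulting temporary credentials to manage that account's resources.
provider "aws" {
  region = "us-west-2"

  assume_role {
    # Hypothetical role in the Product Account that grants deploy permissions.
    role_arn     = "arn:aws:iam::123456789012:role/TerraformDeploy"
    session_name = "terraform"
  }
}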

This issue was also filed long enough ago that it predates the Terraform 0.9 backend refactoring itself. The specific problem of the backend configuration being included in the state snapshots is no longer present, because Terraform does now (as this issue suggested) store the backend configuration in a local file, separate from the state snapshots.
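
As a rough sketch of that arrangement (bucket, key, and role ARN are placeholders): the backend block can be declared with no credentials in it, anything left out can be supplied at terraform init time and is kept only in the local .terraform directory, and the S3 backend can itself assume a role in the account that owns the state bucket:

terraform {
  backend "s3" {
    bucket = "some-s3-bucket"
    key    = "foo/terraform.tfstate"
    region = "us-west-2"

    # Optional: have the backend assume a role in the Master Account that owns
    # the state bucket, instead of passing access/secret keys anywhere.
    role_arn = "arn:aws:iam::111111111111:role/TerraformStateAccess"
  }
}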

With all of this said, we think the main asks of this issue are now met via a combination of architectural changes, feature enhancements in the AWS provider, and improved documentation, and so we're going to close this issue now. Thanks for sharing all of these use-cases, and sorry for the long delay in responding to this issue.

@ghost

ghost commented Aug 19, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Aug 19, 2019