
Using variables in terraform backend config block #13022

Open
glenjamin opened this issue Mar 23, 2017 · 279 comments

Comments

@glenjamin
Contributor

Terraform Version

v0.9.0

Affected Resource(s)

terraform backend config

Terraform Configuration Files

variable "azure_subscription_id" {
    type = "string"
    default = "74732435-e81f-4a43-bf68-ced435436edf"
}
variable "azure_tenant_id" {
    type = "string"
    default = "74732435-e81f-4a43-bf68-ced435436edf"
}
terraform {
    required_version = ">= 0.9.0"
    backend "azure" {
        resource_group_name = "stuff"
        storage_account_name = "morestuff"
        container_name = "terraform"
        key = "yetmorestuff.terraform.tfstate"
        arm_subscription_id = "${var.azure_subscription_id}"
        arm_tenant_id = "${var.azure_tenant_id}"
    }
}

Expected Behavior

Variables are used to configure the backend

Actual Behavior

Error initializing new backend:
Error configuring the backend "azure": Failed to configure remote backend "azure": Couldn't read access key from storage account: Error retrieving keys for storage account "morestuff": autorest#WithErrorUnlessStatusCode: POST https://login.microsoftonline.com/$%7Bvar.azure_tenant_id%7D/oauth2/token?api-version=1.0 failed with 400 Bad Request: StatusCode=400.

Steps to Reproduce

  1. terraform apply

Important Factoids

I wanted to extract these to variables because I'm using the same values in a few places, including in the provider config, where they work fine.

@darrensimio

darrensimio commented Apr 7, 2017

I am trying to do something like this and getting the same "configuration cannot contain interpolations" error. While it seems like this is being worked on, I wanted to also ask if this is the right way for me to use access and secret keys? Does it have to be placed here so that I don't have to check the access and secret keys into GitHub?

terraform {
  backend "s3" {
    bucket     = "ops"
    key        = "terraform/state/ops-com"
    region     = "us-east-1"
    encrypt    = "true"
    access_key = "${var.aws_access_key}"
    secret_key = "${var.aws_secret_key}"
  }
}

@antonosmond

I have the same problem i.e. would love to see interpolations in the backend config. Now that we have "environments" in terraform, I was hoping to have a single config.tf with the backend configuration and use environments for my states. The problem is that I want to assume an AWS role based on the environment I'm deploying to. I can do this in "provider" blocks as the provider block allows interpolations so I can assume the relevant role for the environment I'm deploying to, however if I also rely on the role being set for the backend state management (e.g. when running terraform env select) it doesn't work. Instead I have to use the role_arn in the backend config which can't contain the interpolation I need.
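
A minimal sketch of that asymmetry (the account IDs, role names, and variable are illustrative, not from the original comment):

# Interpolation is accepted in the provider block...
provider "aws" {
  assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/deploy"
  }
}

# ...but the backend block rejects it, so role_arn must be hard-coded:
terraform {
  backend "s3" {
    bucket   = "my-state-bucket"
    key      = "terraform.tfstate"
    region   = "us-east-1"
    role_arn = "arn:aws:iam::123456789012:role/deploy"
  }
}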

@darrensimio

I managed to get it working by using AWS profiles instead of the access keys directly. What I did, though, was not optimal: in my build steps I ran a bash script that called aws configure, which ultimately set the default access key and secret.
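
A rough sketch of that kind of build step (assumes the AWS CLI is installed; the CI variable names are made up):

#!/usr/bin/env bash
# Write the credentials into the default AWS CLI profile so the S3
# backend can pick them up without interpolation in the backend config.
aws configure set aws_access_key_id "$CI_AWS_ACCESS_KEY_ID"
aws configure set aws_secret_access_key "$CI_AWS_SECRET_ACCESS_KEY"
aws configure set region us-east-1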

@wasfree
Contributor

wasfree commented Apr 11, 2017

We want to achieve something similar to @antonosmond. At the moment we use multiple environments, prod/stage, and want to upload tfstate files to S3.

## State Backend
terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "aws/${var.project}/${var.environment}"
    region  = "eu-central-1"
    profile = "default"
    encrypt = "true"
    lock_table = "terraform"
  }
}

This backend definition leads to the following error:

  • terraform.backend: configuration cannot contain interpolations

Now if we try to hardcode it like this:

## State Backend
terraform {
  backend "s3" {
    bucket  = "mybucket"
    key     = "aws/example/prod"
    region  = "eu-central-1"
    profile = "default"
    encrypt = "true"
    lock_table = "terraform"
  }
}

we get the following notification:

Do you want to copy only your current environment?
  The existing backend "local" supports environments and you currently are
  using more than one. The target backend "s3" doesn't support environments.
  If you continue, Terraform will offer to copy your current environment
  "prod" to the default environment in the target. Your existing environments
  in the source backend won't be modified. If you want to switch environments,
  back them up, or cancel altogether, answer "no" and Terraform will abort.

Is there a workaround for this problem at the moment? The documentation for backend configuration does not cover working with environments.

Solved

It seems my local test env was still running Terraform 0.9.1; after updating to the latest version (0.9.2) it worked for me.

Do you want to migrate all environments to "s3"?
  Both the existing backend "local" and the target backend "s3" support
  environments. When migrating between backends, Terraform will copy all
  environments (with the same names). THIS WILL OVERWRITE any conflicting
  states in the destination.
  
  Terraform initialization doesn't currently migrate only select environments.
  If you want to migrate a select number of environments, you must manually
  pull and push those states.
  
  If you answer "yes", Terraform will migrate all states. If you answer
  "no", Terraform will abort.

@gsirvas

gsirvas commented Apr 14, 2017

Hi,
I'm trying to do the same as @NickMetz; I'm running Terraform 0.9.3.

$terraform version
Terraform v0.9.3

This is my code
terraform {
  backend "s3" {
    bucket = "tstbckt27" 
    key = "/${var.env}/t1/terraform.tfstate"
    region = "us-east-1"
  }
}

This is the message when I try to run terraform init

$ terraform init
Initializing the backend...
Error loading backend config: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".

Is this expected behaviour on v0.9.3?

Are there any workarounds for this?

@umeat

umeat commented Apr 15, 2017

In case it's helpful to anyone, the way I get around this is as follows:

terraform {
  backend "s3" {}
}

data "terraform_remote_state" "state" {
  backend = "s3"
  config {
    bucket     = "${var.tf_state_bucket}"
    lock_table = "${var.tf_state_table}"
    region     = "${var.region}"
    key        = "${var.application}/${var.environment}"
  }
}

All of the relevant variables are exported at the deployment pipeline level for me, so it's easy to init with the correct information for each environment.

terraform init \
  -backend-config "bucket=$TF_VAR_tf_state_bucket" \
  -backend-config "lock_table=$TF_VAR_tf_state_table" \
  -backend-config "region=$TF_VAR_region" \
  -backend-config "key=$TF_VAR_application/$TF_VAR_environment"

I don't find this ideal, but at least I can easily switch between environments and create new environments without having to edit any terraform.

@wasfree
Contributor

wasfree commented Apr 15, 2017

@gsirvas @umeat To achieve multiple environments with the same backend configuration it is not necessary to use variables/interpolation. It is expected that it is not possible to use variables/interpolation in backend configuration; see the comment from @christofferh.

Just write it like this:

terraform {
  backend "s3" {
    bucket = "tstbckt27" 
    key = "project/terraform/terraform.tfstate"
    region = "us-east-1"
  }
}

Terraform will split and store environment state files in a path like this:
env:/${var.env}/project/terraform/terraform.tfstate
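
For example, with the 0.9-era env subcommand (renamed to terraform workspace in later releases):

$ terraform env new staging    # create and switch to a "staging" environment
$ terraform env select prod    # switch to the existing "prod" environment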

@umeat

umeat commented Apr 15, 2017

@NickMetz this is about doing multiple environments with multiple backend buckets, not a single backend. You can't specify a different backend bucket in terraform environments. In my example you could still use terraform environments to prefix the state file object name, but you get to specify different buckets for the backend.

Perhaps it's better to just give cross-account access to the user / role which is being used to deploy your terraform: deploying your terraform to a different account, but using the same backend bucket. Though it's fairly reasonable to want to store the state of an environment in the same account that it's deployed to.

@wasfree
Contributor

wasfree commented Apr 15, 2017

@umeat in that case you are right, it is not possible at the moment to use different backends for each environment. It would be more convenient to have a backend mapping for all environments, which is not implemented yet.

@apparentlymart apparentlymart changed the title Using variables in terrform backend config block Using variables in terraform backend config block Apr 15, 2017
@joestump

Perhaps a middle ground would be to not error out on interpolation when the variable was declared in the environment as TF_VAR_foo? Though this might require making such variables immutable? (Which is fine for my use case; not sure about others.)

@knope

knope commented Apr 27, 2017

I also would like to be able to use interpolation in my backend config; using v0.9.4, confirming this frustrating point still exists. In my use case I need to reuse the same piece of code (without writing a new repo each time I'd want to consume it as a module) to maintain multiple separate statefiles.

@nkhanal0

nkhanal0 commented May 10, 2017

Same thing for me. I am using Terraform v0.9.4.

provider "aws" {
	region = "${var.region}"
}

terraform {
	backend "${var.tf_state_backend}" {
		bucket = "${var.tf_state_backend_bucket}"
		key = "${var.tf_state_backend_bucket}/terraform.tfstate"
		region = "${var.s3_location_region}"
	}
}

Here is the error output of terraform validate:

Error validating: 1 error(s) occurred:

* terraform.backend: configuration cannot contain interpolations

The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.

If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".

@kilna-magento

I needs dis! For many features being developed, we want our devs to spin up their own infrastructure that will persist only for the length of time their feature branch exists... to me, the best way to do that would be to use the name of the branch to create the key for the path used to store the tfstate (we're using Amazon infrastructure, so in our case an S3 bucket like the examples above).

I've knocked up a bash script which will update TF_VAR_git_branch every time a new command is run from an interactive bash session.
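
Something along these lines in ~/.bashrc would do it (a sketch; assumes commands are always run from inside the git checkout):

# Refresh TF_VAR_git_branch before each interactive prompt
export PROMPT_COMMAND='export TF_VAR_git_branch=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)'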

This chunk of code would be so beautiful if it worked:

terraform {
  backend "s3" {
    key          = "project-name-${var.git_branch}.tfstate"
    ...
  }
}

Every branch gets its own infrastructure, and you have to switch to master to operate on production. Switching which infrastructure you're operating against could be as easy as checking out a different git branch. Ideally it'd be set up so everything named "project-name-master" would have different permissions that prevented any old dev from applying to it. It would be an infrastructure-as-code dream to get this working.

@kilna-magento

kilna-magento commented Jun 5, 2017

@NickMetz said...

Terraform will split and store environment state files in a path like this:
env:/${var.env}/project/terraform/terraform.tfstate

Your top-level structure looks nice and tidy for traditional dev/staging/prod ... sure:

env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/dev/project1/terraform/terraform.tfstate
env:/dev/project2/terraform/terraform.tfstate

But what if you want to stand up a whole environment for project-specific features being developed in parallel? You'll have a top-level key for each story branch, regardless of which project that story branch is in...

env:/prod/project1/terraform/terraform.tfstate
env:/prod/project2/terraform/terraform.tfstate
env:/staging/project1/terraform/terraform.tfstate
env:/staging/project2/terraform/terraform.tfstate
env:/story1/project1/terraform/terraform.tfstate
env:/story2/project2/terraform/terraform.tfstate
env:/story3/project2/terraform/terraform.tfstate
env:/story4/project1/terraform/terraform.tfstate
env:/story5/project1/terraform/terraform.tfstate

It makes for a mess at the top-level of the directory structure, and inconsistency in what you find inside each story-level dir structure. Full control over the paths is ideal, and we can only get that through interpolation.

Ideally I'd want my structure to look like "project/${var.git_branch}/terraform.tfstate", yielding:

project1/master/terraform.tfstate
project1/stage/terraform.tfstate
project1/story1/terraform.tfstate
project1/story4/terraform.tfstate
project1/story5/terraform.tfstate
project2/master/terraform.tfstate
project2/stage/terraform.tfstate
project2/story2/terraform.tfstate
project2/story3/terraform.tfstate

Now, everything you find for a given project is under its directory... so long as the env is hard-coded at the beginning of the remote tfstate path, you lose this flexibility.

Microservices are better versioned and managed discretely per component, rather than dumped into common prod/staging/dev categories, which might be less applicable on a per-microservice basis; each one might have a different workflow with different numbers of staging phases leading to production release. In the example above project1 might not even have staging... and project2 might have unit/regression/load-testing/staging phases leading to production release.

@2rs2ts
Contributor

2rs2ts commented Jul 10, 2017

you'd think at the very least you'd be allowed to use ${terraform.env}...

@apparentlymart
Member

In Terraform 0.10 there will be a new setting workspace_key_prefix on the S3 backend to customize the prefix used for separate environments (now called "workspaces"), overriding this env: convention.
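
For illustration, the setting sits alongside the rest of the S3 backend config (bucket and key names here are made up):

terraform {
  backend "s3" {
    bucket               = "my-state-bucket"
    key                  = "project1/terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "project1"   # replaces the default "env:" prefix
  }
}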

@gonzaloserrano

I know a +1 does not add much but yeah, need this too to have 2 different buckets, since we have 2 AWS accounts.

@mhowell-ims

I was hoping to do the same thing as described in #13603 but the lack of interpolation in the terraform block prevents this.

@Heiko-san

+1

@lijok

lijok commented Jun 26, 2022

You should look into terragrunt @santichuit

We need to stop promoting terragrunt. All of the problems it proclaims to solve, in its own "Motivation" section of its docs, are artificial.
If you're having trouble with duplicate terraform code, go back to the drawing board and rethink how you've structured your repo
If you're having trouble with the backend config, rethink how you're using workspaces

@lachlankrautz

Providing bucket config with CLI args manually means that it's possible to use a workspace with the wrong backend config.

If there was a single argument that could both specify the workspace and have that automatically use a backend mapped to that workspace then this could be considered safe.

The way it is, I have to ask everyone who uses terraform to be "super duper careful". Conversely, terragrunt can be made safe because everything is set based on the directory you are in. I would love to stop using terragrunt, but the suggestions here are more error prone, and it's difficult to justify unnecessary risk with infrastructure code.

@lijok

lijok commented Jul 31, 2022

The way it is, I have to ask everyone who uses terraform to be "super duper careful".

Have you considered fixing your permission setup?

@lachlankrautz

Have you considered fixing your permission setup?

Nothing wrong with my permissions setup.

@ecs-jnguyen

ecs-jnguyen commented Aug 26, 2022

@lijok

A lot of us work in multiple aws accounts.

I don't want to accidentally have credentials set up for account A and be passing in the backend details for account B. This would cause issues because the changes I intended for account B were actually made to account A.

The current method allows plenty of room for human error.

@FernandoMiguel

A lot of us work in multiple aws accounts. […]

@ecs-jnguyen we manage dozens of accounts, with states in some of them.
My permissions only let me modify one and only one.
If I need to work on another state, I need to change permissions.

Mostly only CI has an assume role that can jump to most accounts.

@lijok

lijok commented Aug 26, 2022

@lijok

A lot of us work in multiple aws accounts. […]

@ecs-jnguyen fix your permissions setup
if you need help, let me know

@ecs-jnguyen

ecs-jnguyen commented Aug 26, 2022

@lijok @FernandoMiguel I agree the scenario I just described isn't ideal. That setup does have permissions issues, but it is still possible. I was just replying to your permissions comment. From your replies it doesn't seem like you guys are keeping an open mind to other people's use cases.

My actual use case is: in every account I have an S3 bucket and DynamoDB table that follow a specific naming convention. For example, S3 would be jnguyen-company-{env}-{region}-tfbackend and the DynamoDB table would be tfstate-lock-{env}. It would be nice if I could have a variable file that specifies stack_name, environment, and region. Then, using a variable file for each environment, the resulting backend would populate the bucket, key, region, and dynamodb_table correctly:

terraform {
  backend "s3" {
    bucket  = "jnguyen-company-develop-us-west-2-tfbackend"
    key     = "my_stack/state.tf"
    region  = "us-west-2"
    dynamodb_table = "tfstate-lock-develop"
  }
}

@FernandoMiguel

@lijok @FernandoMiguel I agree the scenario I just described isn't ideal. […]

You can. Create a backend config file for each and use the one you need.
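
The usual shape of that pattern is one small key/value file per environment, passed at init time (file name is illustrative; backend config files use HCL key = value syntax rather than YAML):

# backend-develop.tfbackend
bucket         = "jnguyen-company-develop-us-west-2-tfbackend"
key            = "my_stack/state.tf"
region         = "us-west-2"
dynamodb_table = "tfstate-lock-develop"

$ terraform init -backend-config=backend-develop.tfbackend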

@ecs-jnguyen

ecs-jnguyen commented Aug 26, 2022

@FernandoMiguel That's exactly what I'm trying to avoid. I don't want a backend file and tfvars for each environment; I'd rather just have the tfvars file for each environment. Why do I need to manage 2 files when the only thing I'm changing is some parameters? What if for some reason we decide to change the company name and company policy mandates that we change the bucket names? (again, obviously not an ideal situation)

You guys are saying to stop promoting terragrunt because it solves artificial problems. I agree most of the problems it solves are artificial. The only reason I'm actually using terragrunt is that native terraform has a limitation on the backends where we have to hardcode values.

@Shocktrooper

Echoing the use case where credentials can be generated and used in another provider, but the same credentials can't be used for, let's say, an S3 backend. That makes it pointless to generate the credentials inside a terraform run, and they must now be moved outside of terraform completely.

References:

@FernandoMiguel

Echoing the use case where credentials can be generated and used in another provider […]

If you have a factory that makes street gates, does it not have to move one of them outside to install in the factory entrance?

@dimisjim

dimisjim commented Sep 15, 2022

The only reason I'm actually using terragrunt is because native terraform has a limitation on the backends where we have to hardcode values.

@ecs-jnguyen

I am not sure whether this reason is enough to justify using a whole wrapper framework on top of terraform

If you have a factory that makes street gates, does it not have to move one of them outside to install in the factory entrance?

lol what? 😅 No, can be done from the inside as well.

@ecs-jnguyen

I am not sure whether this reason is enough to justify using a whole wrapper framework on top of terraform

@dimisjim

I agree with that statement. I don't really want to use terragrunt, but it's the only way I can use variables to populate my backend information. I wish terraform did this natively.

@dimisjim

why not use some simple shell script with variable substitution instead?
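
For example, with envsubst from GNU gettext (a sketch; the template name and variables are made up):

# backend.tf.tmpl
terraform {
  backend "s3" {
    bucket = "${STATE_BUCKET}"
    region = "${STATE_REGION}"
    key    = "${STATE_KEY}"
  }
}

$ STATE_BUCKET=mybucket STATE_REGION=us-east-1 STATE_KEY=my/key \
    envsubst < backend.tf.tmpl > backend.tf && terraform init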

@opteemister
Contributor

We have also just hit the same issue. We use workspaces for different AWS environments and wanted to use different buckets for each workspace, but it looks like it is not possible.

Using a separate config file for each TF run is not useful at all; the same goes for a wrapper.

Is there any chance we'll have this ability in future versions?

@kolesaev

kolesaev commented Sep 19, 2022

We have also just hit the same issue. […]

Hi, @opteemister
I would suggest trying to run your terraform plan via CI/CD tools. You can store environments in Git in different branches, store configs in custom CI/CD variables (like AWS_CREDS_DEV) and then reuse those vars in CI/CD code based on branch names.
But that suggestion only applies when you work in different AWS accounts.

On the other hand, if you work with all the environments (workspaces) in one AWS account, you can be authorized once via the CLI and then use variable files: backend-vars for different buckets, and project-vars for different values inside environments (here is another comment of mine with something like an instruction: #13022 (comment)); see the sketch below.

I hope that you didn't want to store tf-state in one AWS account but prepare environments in others, as somebody asked here.
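
A sketch of that variable-files pattern (file names are illustrative):

# dev.s3.tfbackend (backend values, passed at init time)
bucket = "mycompany-dev-tfstate"
region = "eu-central-1"

$ terraform init -backend-config=dev.s3.tfbackend
$ terraform plan -var-file=dev.tfvars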

@speller

speller commented Sep 20, 2022

I hope that you didn't want to store tf-state in one AWS account, but prepare environments in others

It is good practice to store the state separately from its infrastructure. Storing it in a separate AWS account is a safe method: when dealing with review/staging deployments, many people may have admin access to the infra but they will not break the state, and in the case of production it decreases the risk of sensitive data leaking from the state if production access credentials are compromised. Moreover, a single TF project may deploy to many different accounts simultaneously, so working with different accounts is normal.

@opteemister
Contributor

@kolesaev how do your suggestions relate to the original request for the ability to use variables in the terraform backend?
Yes, there are many ways to work around that limitation, but that doesn't make life easier. Also, all the workarounds really depend on the specific project and use cases.

I wrote my comment just to raise the issue and let people know that more people want this feature.

@samirshaik

samirshaik commented Sep 20, 2022

We were able to get around this by using backend-config when initializing the Terraform project as shown below.

terraform {
  backend "s3" {
    profile = "default"
    encrypt = "true"
    lock_table = "terraform"
  }
}

$ terraform init \
     -backend-config="bucket=mybucket" \
     -backend-config="key=mykey" \
     -backend-config="region=us-east-1"

Reference : https://www.terraform.io/language/settings/backends/configuration

@lvanatta-fnba

Frankly it's nuts this hasn't been addressed yet. Anyone wanting to use Terraform in an enterprise environment is not going to be committing their tfstate or their passwords to source control. So why make it so we have to employ workarounds to make something this basic work? On that note, @samirshaik thank you for the workaround, worked like a charm.

@Saifalkayali

Saifalkayali commented Feb 11, 2023

Agreed, this issue has been open since 2017? Wow :) I'm having to provision a backend.tf while trying not to add access_key and secret_key to git, instead exporting them as env vars, since that works locally and in a pipeline. Can we please add var support in the terraform backend file? Thanks for the save, @samirshaik, works great. Although I do see a warning at https://developer.hashicorp.com/terraform/language/settings/backends/configuration#credentials-and-sensitive-data stating that the secrets are written to the terraform.tfstate files via this method:

terraform init \
   -backend-config "access_key=<REDACTED>"

This at least helps my case of configuring Linode object storage as a terraform backend, but it doesn't mask secrets. Luckily I have the .terraform directory in my .gitignore.
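
For reference, a minimal .gitignore that keeps those files out of git:

# Local working directory; holds backend config (and any init-time secrets)
.terraform/
# Local state and backups, for anything not stored in a remote backend
*.tfstate
*.tfstate.backup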

@rootsher

rootsher commented Feb 20, 2023

I, on the other hand, need to authenticate myself to GCS. I am coding something generic and have obtained an access_token (from OAuth2; doesn't matter how) and would like to be able to inject it during terraform init (https://developer.hashicorp.com/terraform/language/settings/backends/gcs#access_token). I'd like to do something like this (sorry for the wrapper in Node.js, but it should be understandable enough; I didn't want to rewrite it):

spawn('terraform', [
    `-chdir=${chdir}`,
    'init',
    `-backend-config=bucket=${providerState.bucket.id}`,
    `-backend-config=access_token=${providerState.accessToken}`,
]);
terraform {
  backend "gcs" {}
}

I'm also not interested in setting GOOGLE_BACKEND_CREDENTIALS (service account JSON etc.) - in this script I happen to need otherwise - via access_token from OAuth2.

@rootsher

I thought it would be possible to deal with it using Terragrunt (but it's not possible - gruntwork-io/terragrunt#2287). So, a temporary workaround:

TL;DR: Use sed to replace the template file and create the target main.tf.

// ./templates/infrastructure/gcp/init/main.tf.template
terraform {
  backend "gcs" {
    access_token = "##ACCESS_TOKEN##"
  }
}
// ./bin/apply.ts
import spawn from '@npmcli/promise-spawn';

const cloudProvider = "gcp";
const providerState = {
    accessToken: '......fa39.a0fAVbZVsrOkaTjH.......', // paste OAuth2 access_token here
    project: { name: 'project', id: 'project-id' },
    bucket: { name: 'bucket', id: 'bucket-id' },
    region: 'europe-central2'
};

const chdir = `templates/infrastructure/${cloudProvider}`;

(async () => {
    await spawn('cp', [`${chdir}/init/main.tf.template`, `${chdir}/init/main.tf`]);
    await spawn('sed', ['-i', `s/##ACCESS_TOKEN##/${providerState.accessToken}/g`, `${chdir}/init/main.tf`]);

    try {
        const result = await spawn('terraform', [
            `-chdir=${chdir}/init`,
            'init',
            `-backend-config=bucket=${providerState.bucket.id}`
        ]);

        console.log(result.stdout);
    } catch (error) {
        console.error(error.stderr);
    }
})();

Which in the output will generate us a main.tf file with an injected access_token and fire off terraform init as a child process.

// ./templates/infrastructure/gcp/init/main.tf
terraform {
  backend "gcs" {
    access_token = "......fa39.a0fAVbZVsrOkaTjH......."
  }
}

@dan-petty
Contributor

Using things like basename(path.cwd) also don't work, sadly.

@wtchangdm

Hi all, judging by the comments above, -backend-config is probably the preferred way for now.

However, I am trying to use it with assume_role_tags on s3 backend. It looks like:

backend "s3" {
  #...
  assume_role_tags = {
    somekey = "somevalue"
  }
}

It seems it's not really possible to set nested key/value in the command line argument:

$ terraform init -migrate-state -backend-config="assume_role_tags.somekey=somevalue"

# ...
Initializing the backend...
│ Error: Invalid backend configuration argument
│ 
│ The backend configuration argument "assume_role_tags.verify" given on the command line is not expected for the selected backend type.

Any ideas?
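
One thing that may be worth trying (an untested sketch, based on the documented behavior that -backend-config also accepts a path to a file of HCL key/value assignments): put the nested map in a file instead of a command-line pair.

# backend.conf
assume_role_tags = {
  somekey = "somevalue"
}

$ terraform init -migrate-state -backend-config=backend.conf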

@aceqbaceq

backend "s3" {
bucket = var.backend_bucket_name
...

$ terraform init -reconfigure

Initializing the backend...

│ Error: Variables not allowed

│ on main.tf line 19, in terraform:
│ 19: bucket = var.backend_bucket_name

│ Variables may not be used here.

@Shocktrooper

@rootsher With terragrunt just switch the backend to using a generate block and not the terragrunt native backend block. We do interpolation that way which works just fine.
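
For reference, a sketch of that terragrunt pattern (the bucket variable and default are made up):

# terragrunt.hcl
locals {
  state_bucket = get_env("STATE_BUCKET", "my-default-bucket")
}

generate "backend" {
  path      = "backend.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  backend "gcs" {
    bucket = "${local.state_bucket}"
  }
}
EOF
}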
