Using variables in terraform backend config block #13022
I am trying to do something like this and getting the same "configuration cannot contain interpolations" error. While it seems like this is being worked on, I also wanted to ask whether this is the right way for me to use access and secret keys. Do they have to be placed here so that I don't have to check the access and secret keys into GitHub?
|
I have the same problem, i.e. I would love to see interpolations in the backend config. Now that we have "environments" in Terraform, I was hoping to have a single config.tf with the backend configuration and use environments for my states. The problem is that I want to assume an AWS role based on the environment I'm deploying to. I can do this in "provider" blocks, as the provider block allows interpolations, so I can assume the relevant role for the environment I'm deploying to; however, if I also rely on the role being set for the backend state management (e.g. when running |
I managed to get it working by using AWS profiles instead of the access keys directly. What I did wasn't optimal, though: in my build steps, I ran a bash script that called aws configure, which ultimately set the default access key and secret. |
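Roughly this kind of build step, as a sketch only (the environment variables and region are illustrative, not the exact script):

# Write the credentials into the default AWS CLI profile so that both the
# provider and the S3 backend pick them up without anything in the .tf files.
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
aws configure set region us-west-2
terraform init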
We want to achieve something similar to @antonosmond. At the moment we use multiple environments (prod/stage) and want to upload tfstate files to S3.
In this case the above backend definition leads us to this error:
Now if we try to hardcode it like this:
we get the following notification:
Is there a workaround for this problem at the moment? The documentation for backend configuration does not cover working with environments. Solved: it seems my local test env was still running on Terraform 0.9.1; after updating to the latest version 0.9.2 it was working for me.
|
Hi,
This is the message when I try to run terraform init
Is this expected behaviour on v0.9.3? Are there any workarounds for this? |
In case it's helpful to anyone, the way I get around this is as follows:
All of the relevant variables are exported at the deployment pipeline level for me, so it's easy to init with the correct information for each environment.
I don't find this ideal, but at least I can easily switch between environments and create new environments without having to edit any terraform. |
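Concretely, something like this (a sketch; the variable names are illustrative and come from whatever the pipeline exports):

# The backend block in the .tf files stays unparameterized, e.g. an empty backend "s3" {},
# and the per-environment values are supplied at init time from exported pipeline variables.
terraform init \
  -backend-config="bucket=${TF_STATE_BUCKET}" \
  -backend-config="key=${TF_STATE_KEY}" \
  -backend-config="region=${AWS_DEFAULT_REGION}"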
@gsirvas @umeat To achieve multiple environments with the same backend configuration it is not necessary to use variables/interpolation. It is expected that it is not possible to use variables/interpolation in the backend configuration; see the comment from @christofferh. Just write it like this:
Terraform will split and store environment state files in a path like this: |
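For example, a sketch of that flow (the bucket name is made up, and on 0.9.x the command is "terraform env" rather than "terraform workspace"):

# One hard-coded S3 backend, no interpolation needed.
terraform init
# Each environment/workspace gets its own state object under the env:/ prefix,
# e.g. s3://my-tfstate-bucket/env:/stage/terraform.tfstate
terraform workspace new stage
terraform apply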
@NickMetz I'm trying to do multiple environments with multiple backend buckets, not a single backend. You can't specify a different backend bucket in terraform environments. In my example you could still use terraform environments to prefix the state file object name, but you also get to specify different buckets for the backend. Perhaps it's better to just give cross-account access to the user / role which is being used to deploy your terraform, deploying your terraform to a different account but using the same backend bucket. Though it's fairly reasonable to want to store the state of an environment in the same account that it's deployed to. |
@umeat in that case you are right, it is not possible at the moment to use different backends for each environment. It would be more comfortable to have a backend mapping for all environments, which is not implemented yet. |
Perhaps a middle ground would be to not error out on interpolation when the variable was declared in the environment as |
I would also like to be able to use interpolation in my backend config. Using v0.9.4, I can confirm this frustrating point still exists. In my use case I need to reuse the same piece of code (without writing a new repo each time I want to consume it as a module) to maintain multiple separate state files. |
Same thing for me. I am using Terraform v0.9.4.
Here is the error Output of
|
I needs dis! For many features being developed, we want our devs to spin up their own infrastructure that will persist only for the length of time their feature branch exists... to me, the best way to do that would be to use the name of the branch to create the key for the path used to store the tfstate (we're using amazon infrastructure, so in our case, the s3 bucket like the examples above). I've knocked up a bash script which will update TF_VAR_git_branch every time a new command is run from an interactive bash session. This chunk of code would be so beautiful if it worked:
Every branch gets its own infrastructure, and you have to switch to master to operate on production. Switching which infrastructure you're operating against could be as easy as checking out a different git branch. Ideally it'd be set up so everything named "project-name-master" would have different permissions that prevented any old dev from applying to it. It would be an infrastructure-as-code dream to get this working. |
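The branch-tracking part is simple enough; a minimal sketch for an interactive bash session (the function name is made up):

# Re-export the current git branch before each prompt so TF_VAR_git_branch stays current.
_update_tf_branch() {
  export TF_VAR_git_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null)"
}
PROMPT_COMMAND="_update_tf_branch${PROMPT_COMMAND:+; $PROMPT_COMMAND}"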
@NickMetz said...
Your top-level structure looks nice and tidy for traditional dev/staging/prod ... sure:
But what if you want to stand up a whole environment for project-specific features being developed in parallel? You'll have a top-level key for each story branch, regardless of which project that story branch is in...
It makes for a mess at the top-level of the directory structure, and inconsistency in what you find inside each story-level dir structure. Full control over the paths is ideal, and we can only get that through interpolation. Ideally I'd want my structure to look like "project/${var.git_branch}/terraform.tfstate", yielding:
Now everything you find for a given project is under its directory... but so long as the env is hard-coded at the beginning of the remote tfstate path, you lose this flexibility. Microservices are better versioned and managed discretely per component, rather than dumped into common prod/staging/dev categories, which might be less applicable on a per-microservice basis; each one might have a different workflow with a different number of staging phases leading to production release. In the example above, project1 might not even have staging... and project2 might have unit/regression/load-testing/staging phases leading to production release. |
you'd think at the very least you'd be allowed to use |
In Terraform 0.10 there will be a new setting |
I know a +1 does not add much but yeah, need this too to have 2 different buckets, since we have 2 AWS accounts. |
I was hoping to do the same thing as described in #13603 but the lack of interpolation in the terraform block prevents this. |
+1 |
We need to stop promoting terragrunt. All of the problems it proclaims to solve, in its own "Motivation" section of its docs, are artificial. |
Providing bucket config with CLI args manually means that it's possible to use a workspace with the wrong backend config. If there were a single argument that could both specify the workspace and automatically use a backend mapped to that workspace, then this could be considered safe. The way it is, I have to ask everyone who uses terraform to be "super duper careful". Conversely, terragrunt can be made safe because everything is set based on the directory you are in. I would love to stop using terragrunt, but the suggestions here are more error-prone and it's difficult to justify unnecessary risk with infrastructure code. |
Have you considered fixing your permission setup? |
Nothing wrong with my permissions setup. |
A lot of us work in multiple AWS accounts. I don't want to accidentally have credentials set up for one account while applying against another. The current method allows plenty of room for human error. |
@ecs-jnguyen we manage dozens of accounts, with states in some of them. Mostly, only CI has an assume role that can jump to most accounts. |
@ecs-jnguyen fix your permissions setup |
@lijok @FernandoMiguel I agree the scenario I just described isn't ideal. That setup does have permissions issues, but it is still possible. I was just replying to your permissions comment. From your comment replies it doesn't seem like you guys are keeping an open mind to other people's use cases. My actual use case is: in every account I have an S3 bucket and DynamoDB table that follow a specific naming convention. For example, for S3 it would be:

terraform {
backend "s3" {
bucket = "jnguyen-company-develop-us-west-2-tfbackend"
key = "my_stack/state.tf"
region = "us-west-2"
dynamodb_table = "tfstate-lock-develop"
}
} |
You can. Create a backend yaml file for each and use the one you need |
@FernandoMiguel That's exactly what I'm trying to avoid. I don't want a backend file and tfvars for each environment; I'd rather just have the tfvars file for each environment. Why do I need to manage two files when the only thing I'm changing is some parameters? What if for some reason we decide to change the company name and company policy mandates that we change the bucket names? (Again, obviously not an ideal situation.) You guys are saying to stop promoting terragrunt because it solves artificial problems. I agree most of the problems it solves are artificial. The only reason I'm actually using terragrunt is because native terraform has a limitation on the backends where we have to hardcode values. |
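Given the naming convention above, one partial-configuration sketch (reusing the example names from the earlier comment; the ENVIRONMENT variable is hypothetical) would be to compute the names in a tiny wrapper and pass them at init time instead of keeping per-environment backend files:

# ENVIRONMENT is supplied by the caller, e.g. develop / staging / prod
ENVIRONMENT="${ENVIRONMENT:-develop}"
REGION="us-west-2"
terraform init \
  -backend-config="bucket=jnguyen-company-${ENVIRONMENT}-${REGION}-tfbackend" \
  -backend-config="key=my_stack/state.tf" \
  -backend-config="region=${REGION}" \
  -backend-config="dynamodb_table=tfstate-lock-${ENVIRONMENT}"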
Echoing the use case where credentials can be generated and used in another provider, but the same credentials can't be used for, let's say, an S3 backend. This makes it pointless to generate the credentials inside of a Terraform run; they must now be moved outside of Terraform completely. References: |
If you have a factory that makes street gates, does it not have to move one of them outside to install in the factory entrance? |
I am not sure whether this reason is enough to justify using a whole wrapper framework on top of terraform
lol what? |
I agree with that statement. I don't really want to use terragrunt, but it's the only way I can use variables to populate my backend information. I wish terraform did this natively. |
why not use some simple shell script with variable substitution instead? |
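For example, a small sketch of that idea (all names are hypothetical): generate a partial backend configuration file, then hand it to terraform init while the backend block itself stays empty.

#!/usr/bin/env bash
set -euo pipefail
ENV="${1:?usage: ./init.sh <environment>}"
# Render the per-environment backend settings...
cat > backend.generated.hcl <<EOF
bucket = "mycompany-${ENV}-tfstate"
key    = "my_stack/terraform.tfstate"
region = "us-west-2"
EOF
# ...and initialize with them.
terraform init -backend-config=backend.generated.hcl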
Right now we have also hit the same issue. We use workspaces for different AWS environments and wanted to use different buckets for each workspace, but it looks like it is not possible. Using a separate config file for each TF run is not useful at all; the same goes for a wrapper. Is there any chance that we'll have this ability in future versions? |
Hi @opteemister. On the other hand, if you work with all the environments (workspaces) in one AWS account, you can authorize once via the CLI and then use variable files: backend vars for different buckets, and project vars for different values inside environments (here is another comment of mine with something like an instruction: #13022 (comment)). I hope that you don't want to store tf-state in one AWS account but prepare environments in others, as somebody asked here. |
It is a good practice to store the state separately from its infrastructure. Storing it in a separate AWS account is a safe method. In this case, when dealing with review/staging deployments, many people may have admin access to the infra but they will not break the state. In the case of production, this decreases the risk of sensitive data leaking from the state if production access credentials are compromised. Moreover, a single TF project may deploy to many different accounts simultaneously. So working with different accounts is normal. |
@kolesaev how do your suggestions relate to the original request for the possibility to use variables in the terraform backend? I wrote my comment just to raise the issue up and let people know that more people want that feature. |
We were able to get around this by using backend-config when initializing the Terraform project as shown below.
Reference : https://www.terraform.io/language/settings/backends/configuration |
Frankly it's nuts this hasn't been addressed yet. Anyone wanting to use Terraform in an enterprise environment is not going to be committing their tfstate or their passwords to source control. So why make it so we have to employ workarounds to make something this basic work? On that note, @samirshaik thank you for the workaround, worked like a charm. |
Agreed, this issue has been open since 2017? Wow :) I'm having to provision a backend.tf and not trying to add
This at least helps my case in configuring Linode object storage as a Terraform backend, but it doesn't mask secrets. Luckily I have my |
I, on the other hand, need to authenticate myself to GCS. I am coding something generic and have obtained an access token:

spawn('terraform', [
  `-chdir=${chdir}`,
  'init',
  `-backend-config=bucket=${providerState.bucket.id}`,
  `-backend-config=access_token=${providerState.accessToken}`,
]);

terraform {
  backend "gcs" {}
}

I'm also not interested in setting |
I thought it would be possible to deal with it using Terragrunt (but it's not possible - gruntwork-io/terragrunt#2287). So, a temporary workaround. TL;DR: use a template file and substitute the access token into it before running terraform init:

// ./templates/infrastructure/gcp/init/main.tf.template
terraform {
backend "gcs" {
access_token = "##ACCESS_TOKEN##"
}
}

// ./bin/apply.ts
import spawn from '@npmcli/promise-spawn';
const cloudProvider = "gcp";
const providerState = {
accessToken: '......fa39.a0fAVbZVsrOkaTjH.......', // paste OAuth2 access_token here
project: { name: 'project', id: 'project-id' },
bucket: { name: 'bucket', id: 'bucket-id' },
region: 'europe-central2'
};
const chdir = `templates/infrastructure/${cloudProvider}`;
(async () => {
await spawn('cp', [`${chdir}/init/main.tf.template`, `${chdir}/init/main.tf`]);
await spawn('sed', ['-i', `s/##ACCESS_TOKEN##/${providerState.accessToken}/g`, `${chdir}/init/main.tf`]);
try {
const result = await spawn('terraform', [
`-chdir=${chdir}/init`,
'init',
`-backend-config=bucket=${providerState.bucket.id}`
]);
console.log(result.stdout);
} catch (error) {
console.error(error.stderr);
}
})();

Which in the output will generate us a main.tf like this:

// ./templates/infrastructure/gcp/init/main.tf
terraform {
backend "gcs" {
access_token = "......fa39.a0fAVbZVsrOkaTjH......."
}
} |
Using things like |
Hi all, judging by the comments above, passing values with -backend-config at init time is the suggested workaround. However, I am trying to use it with:

backend "s3" {
#...
assume_role_tags = {
somekey = "somevalue"
}
It seems it's not really possible to set nested key/value in the command line argument:

$ terraform init -migrate-state -backend-config="assume_role_tags.somekey=somevalue"
# ...
Initializing the backend...
│ Error: Invalid backend configuration argument
│
│ The backend configuration argument "assume_role_tags.verify" given on the command line is not expected for the selected backend type.

Any ideas? |
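One thing that might be worth trying (an assumption, not something verified in this thread): -backend-config also accepts the path to a file of backend attributes, which can express map values that the key=value command-line form can't. Whether the S3 backend accepts assume_role_tags this way would need to be confirmed:

# Write the nested attribute into a hypothetical partial-configuration file...
cat > backend.hcl <<'EOF'
assume_role_tags = {
  somekey = "somevalue"
}
EOF
# ...and point init at the file instead of a key=value pair.
terraform init -migrate-state -backend-config=backend.hcl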
backend "s3" { $ terraform init -reconfigure Initializing the backend... |
@rootsher With terragrunt just switch the backend to using a generate block and not the terragrunt native backend block. We do interpolation that way which works just fine. |
Terraform Version
v0.9.0
Affected Resource(s)
terraform backend config
Terraform Configuration Files
Expected Behavior
Variables are used to configure the backend
Actual Behavior
Steps to Reproduce
terraform apply
Important Factoids
I wanted to extract these to variables because I'm using the same values in a few places, including in the provider config where they work fine.