Feature: Conditionally load tfvars/tf file based on Workspace #15966
Comments
Hi @atkinchris! Thanks for this suggestion. We have plans to add per-workspace variables as a backend feature, which means that for the local backend it would look for variables in a backend-specific location. We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.

At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.

These items are currently in early planning stages, so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.
Awesome, looking forward to seeing how workspaces evolve. We'll keep loading the workspace-specific variables by hand for now.

@apparentlymart is there another GitHub issue that is related to these plans? Something we could subscribe to? I'm interested in this because we currently have a directory of per-environment variable files in our repo. If these were kept in some backend-specific location, that would be great!

We just want to reference a different VPC CIDR block based on the workspace. Is there any other workaround that could get us going today?
A few common workarounds I've heard about are:
@apparentlymart thanks. I think option one is best. Option 3 doesn't work for us, as we create the VPC with Terraform in the same workspace.
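For reference, the "map variable keyed by workspace name" workaround might look like this; the CIDR values and resource names here are illustrative, not from the thread:

```hcl
# Sketch of the lookup-by-workspace workaround; values are illustrative.
variable "vpc_cidr" {
  type = map(string)
  default = {
    staging    = "10.0.0.0/16"
    production = "10.1.0.0/16"
  }
}

resource "aws_vpc" "main" {
  # terraform.workspace is the name of the currently selected workspace.
  cidr_block = var.vpc_cidr[terraform.workspace]
}
```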
@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars loading, without the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how the tfvars loading for a particular workspace is going to work.

Hi @james-lawrence, in general we can't comment on schedules and timelines because we work iteratively, so there simply isn't a defined schedule for when things get done beyond our current phase of work. However, we tend to split up the work by subsystem in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems, because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc., and thus produce a holistic design.

The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, the ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this immediately, since our current focus is on configuration language usability; work is already in progress in that area, and we want to finish it (or at least reach a good stopping point) before switching context to backends.

That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.
Can anyone explain the difference between a terraform.tfvars file and a variables.tf file, and when to use one over the other? Do you need both, or is one good enough?
variables.tf declares the variables and their default values; a .tfvars file supplies overriding values where needed.
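A minimal illustration of that split (the file names and values are just examples):

```hcl
# variables.tf: declares the variable and gives it a default.
variable "region" {
  type    = string
  default = "us-east-1"
}

# staging.tfvars (passed with `terraform plan -var-file=staging.tfvars`)
# would contain only assignments that override the defaults, e.g.:
#   region = "eu-west-1"
```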
Yet another workaround (based on @apparentlymart's first workaround) that allows you to keep workspace variables in different files (easier to diff). When you add a new workspace you only need to a) add its file and b) add it to the list in the merge. This is horrible, but works: a `workspace1.tf` and `workspace2.tf` each define a locals map for that workspace, and `main.tf` merges them.
Taking @matti's strategy a little further, I like having default values and only customizing per workspace as needed. Here's an example:

```hcl
locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
```

When in workspace `staging` or `production`, the outputs show the corresponding merged values.
I've been thinking about using Terraform in automation and doing something along these lines.

Can someone please give an example/template of "Terraform conditionally loading a .tfvars or .tf file, based on the current workspace"? Even the old way works for me. I just want to run multiple infrastructures from a single directory.

@farman022 Just use one of the workarounds described above.
Like @mhfs's strategy, but with one merge:
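The one-merge variant wasn't captured above, but based on @mhfs's example it might look roughly like this (a sketch; keys and values are illustrative):

```hcl
# Sketch: defaults plus per-workspace overrides, resolved with a single merge.
locals {
  env = {
    defaults   = { project_name = "project-default", region_name = "region-default" }
    staging    = { project_name = "project-staging" }
    production = { region_name = "region-production" }
  }

  # Fall back to defaults for any key the current workspace does not override.
  workspace = merge(local.env["defaults"], local.env[terraform.workspace])
}
```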
Is this feature added in a recent version?

No, this hasn't been added yet (as of the current version). While we try to follow up with issues like this on GitHub, sometimes things get lost in the shuffle; you can always check the Changelog for updates.

This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping Terraform; maybe some of you will find it useful:
Not sure if a native feature has been developed. I personally like to keep the tfvars files as flat as possible; adding an extra map made the entire codebase harder to manage. I've created a simple bash script which detects the current workspace name and then looks for the corresponding tfvars file to apply:
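A minimal sketch of such a wrapper (this is not the commenter's actual script; it assumes the local backend, which records the active workspace in `.terraform/environment`):

```shell
# pick_varfile: print a -var-file flag for the current workspace's tfvars,
# if such a file exists. Hypothetical helper for illustration only.
pick_varfile() {
  workspace="default"
  # The local backend stores the selected workspace name in .terraform/environment.
  if [ -f .terraform/environment ]; then
    workspace="$(cat .terraform/environment)"
  fi
  if [ -f "${workspace}.tfvars" ]; then
    printf -- '-var-file=%s.tfvars' "$workspace"
  fi
}

# Usage: terraform plan $(pick_varfile)
```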
@yeswps wrappers are easy; the problem is that now you need everyone on the team to know about them and use them. This issue is about doing this at scale across multiple developers who are probably just going to run `terraform` and not be aware of a wrapper script. EDIT: Also, wrappers don't work in Terraform Cloud/Enterprise :)
@JustinGrote this is pretty slick, but I'm running into another issue. I'm using the TF VPC and EKS modules. For most of these items, overriding is fairly simple with a YAML file. But it becomes problematic when I want to specify a worker group like this:

```hcl
{
  instance_type        = "t3.medium"
  asg_max_size         = 10
  asg_desired_capacity = 5
  autoscaling_enabled  = true
  tags = [{
    key                 = "app"
    value               = "api"
    propagate_at_launch = true
  }]
  additional_security_group_ids = [aws_security_group.api_worker_group.id]
}
```

Now, I'll look at the definition of that security group:

```hcl
resource "aws_security_group" "api_worker_group" {
  name   = "${local.cluster_name}-api-wg"
  vpc_id = module.vpc.vpc_id
}
```

Well, that's an AWS resource, not a plain value. It gets worse when I want to specify more settings like it.

EDIT: so I can transpose the worker groups to YAML like this:

```yaml
worker_groups:
  - instance_type: "t3.medium"
    asg_max_size: 10
    asg_desired_capacity: 5
    autoscaling_enabled: true
    tags:
      - key: "app"
        value: "api"
        propagate_at_launch: true
```

But I have no way to define the security group reference there.
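One possible way around this (a sketch; the file layout and resource names are assumptions, not from the modules' documentation) is to keep only plain values in YAML and merge the resource references back in from locals:

```hcl
# Sketch: plain settings live in YAML; resource references are attached in HCL.
locals {
  yaml_worker_groups = yamldecode(file("${path.module}/env/${terraform.workspace}.yaml")).worker_groups

  worker_groups = [
    for wg in local.yaml_worker_groups : merge(wg, {
      # References to managed resources cannot be expressed in YAML,
      # so they are added here after decoding.
      additional_security_group_ids = [aws_security_group.api_worker_group.id]
    })
  ]
}
```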
Hi @apparentlymart, what's the status of the variables-per-workspace feature today? As our org is currently doing a major refactor of Terraform-related configs, we'd like to know what the best practice is. Although the wrapper suggested by @yeswps is correct, I also agree with @JustinGrote's point about having to ensure the script is used across the org rather than vanilla `terraform` commands. Or should we just share the knowledge with our org's TF users that they must select the correct workspace and point to its correct var file?
Pretty sure this has already been answered sufficiently in #15966 (comment). |
Thanks for pointing that out @jakubgs
Related and slightly hacky: in Terraform Cloud, set an environment variable (not a Terraform variable) called TF_CLI_ARGS to '-var-file=your-environment-name.tfvars', and plan and apply will use it.
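For example (the file name here is illustrative):

```shell
# TF_CLI_ARGS inserts extra arguments into every terraform command,
# so setting it once makes plan/apply pick up the workspace's var file.
export TF_CLI_ARGS="-var-file=staging.tfvars"
# terraform plan   # now behaves like: terraform plan -var-file=staging.tfvars
```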
Not really. You should be able to pin a var-file to a workspace in such a way as to avoid maintaining a big list of blessed workspaces in your Terraform. Ideal workflow:

Then any future uses of -var-file (if needed) in commands like plan or apply simply merge the other vars on top. The main point is that if you follow a workflow like this, the var-file stays tied to the workspace.
It would be nice to have some feature for initializing variable values per environment, like:

```hcl
variable "my_variable" {
  type          = string
  default       = "generic_value"
  workspace.dev = "specific_value_for_dev_workspace"
}
```

I started using Terraform recently, and this is what I expected when I first saw workspaces.
@menego, I liked this approach. This way we can define variable values per workspace in the same file and also ensure that every workspace gets a value.
I ended up with something similar to @fewbits, but I keep the environment-specific variables in separate YAML files under an "env" folder, named after the workspaces. In variables.tf I load the file matching the current workspace. NOTE: adding environment_name is optional; it's something I like to have available. I can then reference the values through locals. Assuming you don't create a file called default.yaml, you will get an error if someone tries to use the default workspace.
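A sketch of this pattern (the file names and keys are assumptions on my part):

```hcl
# env/staging.yaml (hypothetical) might contain:
#   location: "East US 2"
#   instance_count: 2
locals {
  # Load the YAML file named after the current workspace and tack on the
  # optional environment_name key mentioned above.
  env = merge(
    yamldecode(file("${path.module}/env/${terraform.workspace}.yaml")),
    { environment_name = terraform.workspace }
  )
}

# Values are then referenced as local.env.location, local.env.instance_count, etc.
```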
With Terraform Enterprise, using this will error; use the TFC_WORKSPACE_NAME variable instead.
Could someone confirm whether the above solution with a YAML/JSON file basically prevents us from using normal variable overrides?
@dinvlad Yes, that is the case. You could put some logic in to use a variable value if it exists, but that defeats the purpose of environment-specific variable files.
I see, thanks. FWIW, I've preferred a simple linking trick to get the benefits of both approaches, as part of a custom init script. This way, the two are linked together (so a developer can't inadvertently mix them), and the only caveat is that we have to ask the team to use this custom init script instead of a standard `terraform init`.
@dinvlad Can you expand further on your linking trick?
I've seen options to decode and merge local variables with YAML/JSON files. But is it possible to decode or merge a tfvars file in the same way?
Since I'm using Terraform Cloud, I had to use this variable:

```hcl
variable "TFC_WORKSPACE_NAME" {
  type = string
}

locals {
  env = merge(
    yamldecode(file("env/${var.TFC_WORKSPACE_NAME}.yaml"))
  )
}

resource "azurerm_resource_group" "group" {
  name     = local.env.group
  location = local.env.location
}
```
Terraform automatically loads some var files by default, but for per-workspace values the workarounds in this thread each have trade-offs (I particularly use one of them):

Using maps in vars

While this is fully valid functionality and keeps all the new TF12 types and so on, it really makes the code look a lot more complex, as all values now have to be a map and all assignments have to be a lookup into the map.

Using YAML/JSON and loading into locals

This is great, as it avoids a lot of what I mentioned for the maps in vars, but now you can't take advantage of type checking, var definitions, etc.

Symlink/
I've got existing prod infrastructure with one variables.tf file, and I'm trying to split out another dev environment which will use the same TF modules as prod but different variable files. I'm running plan with the dev var file, and for PROD with the prod var file. The plan looks good, but I'm worried about the single state file defined with the S3 backend. If I run a dev apply, will it affect my existing state file in the S3 bucket, and could that cause errors during prod deployment?
@cvemula1 that's a bit out of scope. Anyway, if I understood your point, I think you should segregate envs with different workspaces, or with explicitly different resources in the same workspace.
Now I'm using this approach:

variables.tf:

```hcl
variable "azure_location" {
  type = map
  default = {
    "dev"  = "East US 2"
    "qa"   = "East US 2"
    "prod" = "Brazil South"
  }
}
```

resource-groups.tf:

```hcl
resource "azurerm_resource_group" "my-resource-group" {
  name     = "MY-RESOURCE-GROUP"
  location = var.azure_location[terraform.workspace]
}
```

This way, when I execute a plan or apply, the location matching the current workspace is used. I don't know if this is the best approach, though.
@fewbits - I've done a similar pattern in the past. I think it's probably better to use locals instead of variable defaults for this though - unless you really explicitly want to be able to override it from outside the module. |
Hi @fewbits, how can you specify the workspace in the terraform block when using your pattern? (I'm using Terraform Cloud.)
@kyledakid in all my TF projects I put the same boilerplate in place. For my Terraform Cloud remote runs I define a variable for the workspace name; with this, I can reuse my code for remote and local runs.
Yes, exactly. I found this somewhere on Medium after posting the above question to you. Thank you @sereinity!

Hi @kyledakid. I do not use Terraform Cloud (I just use plain Terraform). @sereinity, nice hint.
Avoiding the need to modify main.tf

If an explicit workspace-to-var-file mapping is not required, I don't think I've seen a previous solution which completely decouples the workspaces from the configuration itself.
Nice tip @sereinity, I'm stealing it.
There is a feature request for this kind of var-file handling. There is also an experimental "tfvars" provider in the registry that should allow this:

```hcl
provider "tfvars" {}

data "tfvars_file" "example" {
  filename = "${terraform.workspace}.tfvars"
}

output "variables" {
  value = data.tfvars_file.example.variables
}
```

We have been experimenting with this for a while, but we are not sure if it's a pattern we really want to use. The only advantage we found so far comes from having the variables exposed as data. For us the biggest challenge is that a Terraform Cloud workspace is not the same as a Terraform OSS workspace; we hope any new workspace features mean less difference between the two.
Feature Request

Terraform to conditionally load a `.tfvars` or `.tf` file, based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variable file, depending on the workspace.

For example:

Other Thoughts

Conditionally loading a file would be flexible, but possibly powerfully magic. Conditionally loading parts of a `.tf`/`.tfvars` file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.