Feature: Conditionally load tfvars/tf file based on Workspace #15966

Open
atkinchris opened this Issue Aug 30, 2017 · 22 comments


atkinchris commented Aug 30, 2017

Feature Request

Allow Terraform to conditionally load a .tfvars or .tf file based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file, depending on the workspace.

For example:

application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production

Other Thoughts

Conditionally loading a file would be flexible, but possibly too magical. Conditionally loading parts of a .tf/.tfvars file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.

apparentlymart (Contributor) commented Aug 30, 2017

Hi @atkinchris! Thanks for this suggestion.

We have plans to add per-workspace variables as a backend feature. This means that for the local backend it would look for variables at terraform.d/workspace-name.tfvars (alongside the local states) but in the S3 backend (for example) it could look for variable definitions on S3, keeping the record of the variables in the same place as the record of which workspaces exist. This would also allow more advanced, Terraform-aware backends (such as the one for Terraform Enterprise) to support centralized management of variables.

We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.

At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.

These items are currently in early planning stages and so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.
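
The kind of variable-driven toggling described here can already be approximated in today's language with count and a conditional. A minimal sketch, not the planned feature; the variable and resource names are hypothetical:

```hcl
# Hypothetical feature flag, e.g. set from a per-workspace var-file.
variable "enable_extra" {
  default = false
}

# The resource is created only when the flag is true; count = 0 disables it.
resource "null_resource" "extra" {
  count = "${var.enable_extra ? 1 : 0}"
}
```

Passing -var 'enable_extra=true' (or setting it in a workspace's var-file) turns the resource on.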


atkinchris commented Aug 31, 2017

Awesome, look forward to seeing how workspaces evolve.

We'll keep loading the workspace-specific variables with -var-file=staging.tfvars.


b-dean commented Oct 11, 2017

@apparentlymart is there another github issue that is related to these plans? Something we could subscribe to?

I'm interested in this because we currently have a directory in our repo with env/<short account nickname>-<workspace>.tfvars files, and it's a bit of a pain to have to remember to mention them every time when doing plans, etc. (Forgetting one is immediately obvious on a plan, since nothing looks like you expect, but it could be dangerous to forget on an apply.)

If these were kept in some backend-specific location, that would be great!


et304383 commented Nov 1, 2017

We just want to reference a different VPC CIDR block based on the workspace. Is there any workaround that could get us going today?

apparentlymart (Contributor) commented Nov 1, 2017

A few common workarounds I've heard about are:

  • Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace.
  • Place per-workspace settings in some sort of per-workspace configuration store, such as Consul's key/value store, and then use the above technique to select an appropriate Consul server to read from based on the workspace. This way there's only one per-workspace indirection managed directly in Terraform, to find the Consul server, and everything else is obtained from there. Even this map can be avoided with some systematically-created DNS records to help Terraform find a Consul server given the value of terraform.workspace.
  • (For VPCs in particular) Use AWS tags to systematically identify which VPC belongs to which workspace, and use the aws_vpc data source to look one up by tag to obtain its cidr_block attribute.
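
The first workaround above might look like this minimal sketch (0.11 syntax; the workspace names and CIDR values are made up for illustration):

```hcl
locals {
  # Per-workspace values, keyed by workspace name.
  vpc_cidrs = {
    staging    = "10.0.0.0/16"
    production = "10.1.0.0/16"
  }

  # Index the map with the current workspace name to get the active value.
  vpc_cidr = "${local.vpc_cidrs[terraform.workspace]}"
}
```

local.vpc_cidr can then be referenced anywhere the per-workspace value is needed.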

et304383 commented Nov 1, 2017

@apparentlymart thanks. I think option one is best. Option 3 doesn't work for us, as we create the VPC with Terraform in the same workspace.


james-lawrence commented Nov 29, 2017

@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars loading, without the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how the tfvars loading for a particular workspace is going to work.

apparentlymart (Contributor) commented Nov 29, 2017

Hi @james-lawrence,

In general we can't comment on schedules and timelines because we work iteratively, and thus there simply isn't a defined schedule for when things get done beyond our current phase of work.

However, we tend to prefer to split up the work by what subsystem it relates to in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc and thus produce a holistic design.

The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this right now, since our current focus is on the configuration language usability and work is already in progress in that area which we want to finish (or, at least, reach a good stopping point) before switching context to backends.


non7top commented Nov 29, 2017

That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.


ura718 commented Dec 19, 2017

Can anyone explain the difference between a terraform.tfvars and a variables.tf file, and when to use one over the other? Do you need both, or is just one enough?


non7top commented Dec 19, 2017

variables.tf has variable definitions and default values; a .tfvars file has overriding values where needed.
You can have a single .tf file and several .tfvars files, each defining a different environment.
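
For example (both files shown in one block; the variable name is illustrative):

```hcl
# variables.tf: declares the variable and its default value
variable "instance_type" {
  default = "t2.micro"
}

# staging.tfvars (a separate file): overrides the default when passed
# with -var-file=staging.tfvars
# instance_type = "t2.small"
```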


matti commented Jan 23, 2018

Yet another workaround (based on @apparentlymart's first workaround) that allows you to keep workspace variables in different files (easier to diff). When you add a new workspace you only need to a) add the file and b) add it to the list in the merge. This is horrible, but it works.

workspace1.tf

locals {
  workspace1 = {
    workspace1 = {
      project_name = "project1"
      region_name  = "europe-west1"
    }
  }
}

workspace2.tf

locals {
  workspace2 = {
    workspace2 = {
      project_name = "project2"
      region_name  = "europe-west2"
    }
  }
}

main.tf

locals {
  workspaces = "${merge(local.workspace1, local.workspace2)}"
  workspace  = "${local.workspaces[terraform.workspace]}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

mhfs commented Feb 15, 2018

Taking @matti's strategy a little further, I like having default values and only customizing per workspace as needed. Here's an example:

locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name  = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

When in workspace staging it outputs:

project_name = project-staging
region_name = region-default
workspace = staging

When in workspace production it outputs:

project_name = project-default
region_name = region-production
workspace = production

tilgovi commented Feb 15, 2018

I've been thinking about using Terraform in automation and doing something like -var-file $TF_WORKSPACE.tfvars.


farman022 commented Feb 25, 2018

Can someone please give an example/template of "Terraform to conditionally load a .tfvars or .tf file, based on the current workspace"? Even the old way works for me. I just want to run multiple infrastructures from a single directory.


landon9720 commented Apr 6, 2018

@farman022 Just use the -var-file command line option to point to your workspace-specific vars file.


bborysenko commented Apr 13, 2018

Like the @mhfs strategy, but with one merge:

locals {

  env = {
    defaults = {
      project_name = "project_default"
      region_name = "region-default"
    }

    staging = {
      project_name = "project-staging"
    }

    production = {
      region_name = "region-production"
    }
  }

  workspace = "${merge(local.env["defaults"], local.env[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

menego commented Apr 17, 2018

locals {
  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}

ahsannaseem commented Aug 17, 2018

Is this feature added in v0.11.7? I tried creating terraform.d with qa.tfvars and prod.tfvars, then selected the qa workspace. On plan/apply it seems that it is not detecting qa.tfvars.

mildwonkey (Contributor) commented Aug 20, 2018

No, this hasn't been added yet (current version is v0.11.8).

While we try to follow up with issues like this on GitHub, sometimes things get lost in the shuffle - you can always check the Changelog for updates.


hussfelt commented Aug 31, 2018

This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping Terraform; maybe some of you will find it useful:
https://github.com/pgporada/terraform-makefile


gudata commented Oct 29, 2018

My first thoughts were that workspaces are great for managing environments, but then I found in the docs that they are not recommended for this. Is this still valid, or is the context different?

In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.

https://www.terraform.io/docs/state/workspaces.html
