
Feature: Conditionally load tfvars/tf file based on Workspace #15966

Open

atkinchris opened this issue Aug 30, 2017 · 69 comments

@atkinchris commented Aug 30, 2017

Feature Request

Allow Terraform to conditionally load a .tfvars or .tf file based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state for different environments. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file depending on the workspace.

For example:

application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production

Other Thoughts

Conditionally loading a file would be flexible, but possibly powerfully magic. Conditionally loading parts of a .tf/.tfvars file based on workspace, or being able to specify different default values per workspace within a variable, could be more explicit.

@apparentlymart (Member) commented Aug 30, 2017

Hi @atkinchris! Thanks for this suggestion.

We have plans to add per-workspace variables as a backend feature. This means that for the local backend it would look for variables at terraform.d/workspace-name.tfvars (alongside the local states) but in the S3 backend (for example) it could look for variable definitions on S3, keeping the record of the variables in the same place as the record of which workspaces exist. This would also allow more advanced, Terraform-aware backends (such as the one for Terraform Enterprise) to support centralized management of variables.

We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.

At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.

These items are currently in early planning stages and so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.

@atkinchris (Author) commented Aug 31, 2017

Awesome, look forward to seeing how workspaces evolve.

We'll keep loading the workspace specific variables with -var-file=staging.tfvars.

@b-dean commented Oct 11, 2017

@apparentlymart is there another github issue that is related to these plans? Something we could subscribe to?

I'm interested in this because we currently have a directory in our repo with env/<short account nickname>-<workspace>.tfvars files, and it's a bit of a pain to remember to mention them every time we run plans, etc. (It's immediately obvious when you forget the file on a plan, because nothing looks like you expect, but forgetting it on an apply could be dangerous.)

If these were kept in some backend-specific location, that would be great!

@et304383 commented Nov 1, 2017

We just want to reference a different VPC CIDR block based on the workspace. Is there any other workaround that could get us going today?

@apparentlymart (Member) commented Nov 1, 2017

A few common workarounds I've heard about are:

  • Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace (see the sketch after this list).
  • Place per-workspace settings in some sort of per-workspace configuration store, such as Consul's key/value store, and then use the above technique to select an appropriate Consul server to read from based on the workspace. This way there's only one per-workspace indirection managed directly in Terraform, to find the Consul server, and everything else is obtained from there. Even this map can be avoided with some systematically-created DNS records to help Terraform find a Consul server given the value of terraform.workspace.
  • (For VPCs in particular) Use AWS tags to systematically identify which VPC belongs to which workspace, and use the aws_vpc data source to look one up based on tag, to obtain the cidr_block attribute.
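
For example, a minimal sketch of the first workaround (the variable names and values here are hypothetical):

locals {
  # per-workspace values, keyed by workspace name
  instance_types = {
    staging    = "t2.micro"
    production = "m4.large"
  }

  # index the map with the current workspace name
  instance_type = "${local.instance_types[terraform.workspace]}"
}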
@et304383 commented Nov 1, 2017

@apparentlymart thanks. I think option one is best. Option three doesn't work, as we create the VPC with Terraform in the same workspace.

@james-lawrence commented Nov 29, 2017

@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars, and not the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how the tfvars being loaded for a particular workspace is going to work.

@apparentlymart (Member) commented Nov 29, 2017

Hi @james-lawrence,

In general we can't comment on schedules and timelines because we work iteratively, and thus there simply isn't a defined schedule for when things get done beyond our current phase of work.

However, we tend to prefer to split up the work by what subsystem it relates to in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc and thus produce a holistic design.

The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this right now, since our current focus is on the configuration language usability and work is already in progress in that area which we want to finish (or, at least, reach a good stopping point) before switching context to backends.

@non7top commented Nov 29, 2017

That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.

@ura718 commented Dec 19, 2017

Can anyone explain what the difference is between the terraform.tfvars and variables.tf files, and when to use one over the other? And do you need both, or is just one good enough?

@non7top commented Dec 19, 2017

A variables .tf file has definitions and default values; a .tfvars file has overriding values if needed.
You can have a single .tf file and several .tfvars files, each defining a different environment.
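
For illustration (with hypothetical names), a definition with a default in variables.tf:

variable "instance_type" {
  default = "t2.micro"
}

and an override in staging.tfvars, loaded with -var-file=staging.tfvars:

instance_type = "t3.medium"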

@matti commented Jan 23, 2018

Yet another workaround (based on @apparentlymart's first workaround) that allows you to have workspace variables in different files (easier to diff). When you add new workspaces you only need to a) add the file and b) add it to the list in the merge. This is horrible, but works.

workspace1.tf

locals {
  workspace1 = {
    workspace1 = {
      project_name = "project1"
      region_name  = "europe-west1"
    }
  }
}

workspace2.tf

locals {
  workspace2 = {
    workspace2 = {
      project_name = "project2"
      region_name  = "europe-west2"
    }
  }
}

main.tf

locals {
  workspaces = "${merge(local.workspace1, local.workspace2)}"
  workspace  = "${local.workspaces[terraform.workspace]}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
@mhfs commented Feb 15, 2018

Taking @matti's strategy a little further, I like having default values and customizing per workspace only as needed. Here's an example:

locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name  = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

When in workspace staging it outputs:

project_name = project-staging
region_name = region-default
workspace = staging

When on workspace production it outputs:

project_name = project-default
region_name = region-production
workspace = production
@tilgovi commented Feb 15, 2018

I've been thinking about using Terraform in automation and doing something like -var-file $TF_WORKSPACE.tfvars.

@farman022 commented Feb 25, 2018

Can someone please give an example/template of "Terraform to conditionally load a .tfvars or .tf file, based on the current workspace"? Even the old way would work for me. I just want to run multiple infras from a single directory.

@landon9720 commented Apr 6, 2018

@farman022 Just use the -var-file command line option to point to your workspace-specific vars file.

@bborysenko commented Apr 13, 2018

Like @mhfs's strategy, but with one merge:

locals {

  env = {
    defaults = {
      project_name = "project_default"
      region_name = "region-default"
    }

    staging = {
      project_name = "project-staging"
    }

    production = {
      region_name = "region-production"
    }
  }

  workspace = "${merge(local.env["defaults"], local.env[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}
@menego commented Apr 17, 2018

locals {
  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}
@ahsannaseem commented Aug 17, 2018

Is this feature added in v0.11.7? I tried creating terraform.d with qa.tfvars and prod.tfvars, then selected workspace qa. On apply/plan it seems that it is not detecting qa.tfvars.

@mildwonkey (Member) commented Aug 20, 2018

No, this hasn't been added yet (current version is v0.11.8).

While we try to follow up with issues like this on GitHub, sometimes things get lost in the shuffle - you can always check the Changelog for updates.

@hussfelt commented Aug 31, 2018

This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping Terraform; maybe some of you will find it useful:
https://github.com/pgporada/terraform-makefile

@yeswps commented Dec 12, 2019

Not sure if a native feature has been developed. I personally like to keep the tfvars file as flat as possible; adding an additional map makes the entire codebase harder to manage.

I've created a simple bash script, which detects the current workspace name, then tries to find the corresponding tfvars file to apply:

#!/bin/bash
workspace=$(terraform workspace show)
echo "Current workspace is $workspace"
tfvars_file="$workspace.tfvars"
if test -f "$tfvars_file"; then
    echo "Found $tfvars_file, applying..."
    terraform apply -var-file="$tfvars_file"
else
    echo "Cannot find $tfvars_file, will not apply"
fi

@JustinGrote commented Dec 12, 2019

@yeswps wrappers are easy; the problem is that now you need everyone on the team to know about them and use them. This issue is about doing this at scale across multiple developers, who are probably just going to run terraform and not be aware of a wrapper script.

EDIT: Also wrappers don't work in Terraform Cloud/Enterprise :)

@davisford commented Dec 16, 2019

As of 0.12.2, you can also use YAML for this purpose, which is generally better for config settings. While you can go crazy with a lot of nested maps and lists, I generally recommend keeping it flat, "ini-style", if at all possible.

@JustinGrote this is pretty slick, but I'm running into another issue.

I'm using the TF VPC and EKS modules.

For most of these items, overriding is fairly simple with a YAML file. But where it becomes problematic for me is when I want to specify additional_security_group_ids in the EKS worker_groups. Example of the TF:

    {
      instance_type        = "t3.medium"
      asg_max_size         = 10
      asg_desired_capacity = 5
      autoscaling_enabled  = true
      tags = [{
        key                 = "app"
        value               = "api"
        propagate_at_launch = true
      }]
      additional_security_group_ids = [aws_security_group.api_worker_group.id]
    }

Now, I'll look at the definition of api_worker_group:

resource "aws_security_group" "api_worker_group" {
  name   = "${local.cluster_name}-api-wg"
  vpc_id = module.vpc.vpc_id
}

Well, that's an AWS resource that references the VPC ID, which I don't have in the YAML. The problem with the YAML or JSON approach is that it can't reference other parts of the TF vars.

It gets worse when I want to specify the workers_additional_policies array in the EKS module. The policies I have for one workspace are a quite extensive list of Terraform AWS resources. I'm not seeing an easy way to transpose that to YAML aside from filling out literally dozens and dozens of individual fields, thus also causing my default_tfsettings definition to explode.

EDIT: I can transpose worker groups to YAML like this:

worker_groups:
    - instance_type: "t3.medium"
      asg_max_size: 10
      asg_desired_capacity: 5
      autoscaling_enabled: true
      tags:
        - key: "app"
          value: "api"
          propagate_at_launch: true

But I have no way to define additional_security_group_ids in the YAML. Perhaps I can do a merge here?
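
One shape that merge might take (a sketch only: it assumes a worker_groups key in the per-workspace YAML file described above, and merges the Terraform-side IDs into each decoded entry):

locals {
  # worker group settings decoded from the per-workspace YAML file
  yaml_workers = yamldecode(file("env/${terraform.workspace}.yaml"))["worker_groups"]

  # merge the IDs that only exist on the Terraform side into each group
  worker_groups = [
    for wg in local.yaml_workers :
    merge(wg, {
      additional_security_group_ids = [aws_security_group.api_worker_group.id]
    })
  ]
}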

@naseemkullah commented Jan 8, 2020

(Quoting @apparentlymart's reply above about adding per-workspace variables as a backend feature.)

Hi @apparentlymart, what's the status of the variables-per-workspace feature today? As our org is currently doing a major refactor of Terraform-related configs, we'd like to know what the best practice is.

Although the wrapper suggested by @yeswps is correct, I also agree with @JustinGrote's point about having to ensure the script is used across the org rather than the vanilla terraform command. Nothing of the sort has been integrated into Terraform yet, I guess?

Or should we just share the knowledge with our org's TF users that they must select the correct workspace and point to its correct tfvars file, or else they are configuring resources wrong?

@jakubgs commented Jan 8, 2020

Pretty sure this has already been answered sufficiently in #15966 (comment).
I don't really see why this is still open.

@naseemkullah commented Jan 8, 2020

Pretty sure this has already been answered sufficiently in #15966 (comment).
I don't really see why this is still open.

Thanks for pointing that out @jakubgs

@tristanmorgan commented Jan 31, 2020

Related and slightly hacky: in Terraform Cloud, set an environment variable (not a Terraform var) called TF_CLI_ARGS to '-var-file=your-environment-name.tfvars' and the plan and apply will use it.

@dcow commented Feb 4, 2020

Pretty sure this has already been answered sufficiently in #15966 (comment).
I don't really see why this is still open.

Not really. You should be able to pin a var-file to a workspace in such a way as to avoid entering a big list of blessed workspaces in your terraform.

Ideal workflow:

$ terraform workspace new foo -var-file ephemeral/tfvars

Then any future use of -var-file (if needed) in commands like plan or apply simply merges in the other vars on top.

The main point is that if you follow a workspace equals environment philosophy, things should be totally and completely isolated. You shouldn't have things concerning other workspaces poisoning your terraform.

@fewbits commented Apr 29, 2020

It would be nice to have some feature like initializing variable values per environment, like:

variable "my_variable" {
  type = string

  default = "generic_value"
  workspace.dev = "specific_value_for_dev_workspace"
}

I started using Terraform recently, and when I saw default inside of the variable syntax in the docs, I had the impression it was related to the workspace.

@fewbits commented Apr 29, 2020

locals {
  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}

@menego, I liked this approach.

This way we can define variable values per workspace in the same file, and also ensure that terraform plan will fail if there is no value defined for the current workspace. And as a bonus: it's also possible to force Terraform to fail when running with the default workspace (demanding that the user choose a specific/valid workspace).
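
A related guard sometimes seen in the community makes the default-workspace failure explicit rather than relying on a missing map key; this is a sketch using the old file()-as-assertion trick (the path in the message is deliberately nonexistent, so evaluation fails with a readable error):

locals {
  # fails the run unless a non-default workspace is selected
  assert_workspace = terraform.workspace == "default" ? file("ERROR: select a named workspace; default is not allowed") : null
}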

@andrew-sumner commented Apr 29, 2020

I ended up with something similar to @fewbits, but keep the environment-specific variables in separate YAML files under an "env" folder, e.g.:

env/
|-- dev.yaml
|-- test.yaml
|-- prod.yaml

An example of one of these files would be:

setting1: value1
setting2: value2

In variables.tf I load the variables using:

locals {
  env = merge(
    yamldecode(file("env/${terraform.workspace}.yaml")),
    { "environment_name" = terraform.workspace }
  )
}

NOTE: adding environment_name is optional; it's something I like to have available.

I can then reference the values using local.env.setting1 and local.env.environment_name.

Assuming you don't create a file called default.yaml, you will get an error if someone tries to use the default workspace.

@jeffmccollum commented May 5, 2020

Related and slightly hacky, In Terraform Cloud set an environment variable (not terraform var) called TF_CLI_ARGS to '-var-file=your-enviroment-name.tfvars' and the plan and apply will use it.

With Terraform Enterprise, using this will cause an error. Use TF_CLI_ARGS_plan instead, as this will only use the -var-file for plan, instead of for every terraform command.

@dinvlad commented May 9, 2020

Could someone confirm whether the above solution using a YAML/JSON file basically prevents us from using variable declarations for these values? I.e. all environment-specific values are accessed through locals now, while only "shared" values can still be accessed via variables?

@andrew-sumner commented May 9, 2020

@dinvlad Yes, that is the case. You could put some logic in to use a variable value if it exists (sketched below), but that defeats the purpose of environment-specific variable files.
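
For reference, the fallback logic described (and discouraged) above might look roughly like this, assuming a setting1 key in the per-workspace YAML file:

variable "setting1" {
  type    = string
  default = ""
}

locals {
  env = yamldecode(file("env/${terraform.workspace}.yaml"))

  # prefer an explicitly supplied variable over the per-workspace file
  setting1 = var.setting1 != "" ? var.setting1 : local.env.setting1
}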

@dinvlad commented May 9, 2020

I see, thanks. FWIW, I've preferred to use a simple linking trick to get the benefits of .tfvars files:

ln -sf "env/${PROJECT}.tfvars" "terraform.tfvars"

as part of a custom terraform-init.sh script that also initializes the backend bucket in the same cloud ${PROJECT}.

This way, the two are linked together (so a developer can't inadvertently mix them), and the only caveat is that we have to ask the team to use this custom init script instead of a standard terraform init. But it avoids the need to use workspaces (obviously, this only works when we have one environment per project).

@johnstrickler commented May 28, 2020

@dinvlad Can you expand further on your linking trick?

@epomatti commented May 30, 2020

I've seen options to decode and merge local variables with .yaml and .json files.

But is it possible to decode or merge a .tfvars file?

@epomatti commented May 30, 2020

Since I'm using Terraform Cloud, I had to use the TFC_WORKSPACE_NAME variable:

variable "TFC_WORKSPACE_NAME" {
  type = string
}

locals {
  env = merge(
    yamldecode(file("env/${var.TFC_WORKSPACE_NAME}.yaml"))
  )
}

resource "azurerm_resource_group" "group" {
  name     = local.env.group
  location = local.env.location
}
@pecigonzalo commented Jun 1, 2020

@dinvlad Can you expand further on your linking trick?

Terraform automatically loads terraform.tfvars or any $NAME.auto.tfvars file, so you can use a symlink from a var file to a "linked" file locally with one of those names on initialization and avoid having to pass -var-file=path.


While this (I particularly use this or the TF_CLI_ARGS one) and some of the others are really clever, they break a lot of functionality.

Using maps in vars

While this is fully valid functionality and keeps all the new TF12 types etc., it produces code that looks a lot more complex, as all values now have to be a map and all assignments have to be a lookup in the map.

Using YAML/JSON and loading into locals

This is great, as it avoids a lot of what I mentioned for the maps in vars, but now you can't take advantage of type checking, var definitions, etc. This is really exploiting locals to get vars, which while great is IMO a hacky solution.

Symlink/TF_CLI_ARGS/other scripting

This is the current solution I use, as it has netted me the best results, but as said it's hard to sync across all devs, as everyone requires this wrapper or bootstrap script.
You can use dotenv to automatically assign TF_CLI_ARGS, but you have no easy/clean way to tell it you're on workspace X when you switch, without some script magic that, again, everyone has to have.
The problem with TF_CLI_ARGS is that it is a bit broken: you have to set args per command (e.g. TF_CLI_ARGS_plan) instead of setting them top-level in TF_CLI_ARGS, because otherwise some commands break.
You can use zsh or other shell hooks to automatically set those based on the output of terraform workspace show, but again you have to sync all your devs and CI on that script.

This has been open for a while, and there have been a couple of good solutions presented here that would simplify all workflows.
IMO, something like #15966 (comment) would be perfect, or even something like what is done with $NAME.auto.tfvars, like having $WORKSPACE_NAME.workspace.tfvars.

@cvemula1 commented Jul 7, 2020

I have existing prod infrastructure with one variables.tf file, and I'm trying to separate out another dev environment which will use the same TF modules as prod but with different variable files. Now I have:

* dev.tfvars
* prod.tfvars

For dev, I'm trying to run:

terraform apply -input=false $DEV_PLAN -var-file="dev.tfvars"

For prod:

terraform apply -input=false $PLAN -var-file="prod.tfvars"

The plan looks good, but I'm worried about the single state file, which is defined with an S3 backend.

If I run the dev apply, will it affect my existing state file in the S3 bucket, causing errors during the prod deployment?

@maxgio92 commented Jul 9, 2020

@cvemula1 that's a bit out of scope... Anyway, if I understood your point, I think you should segregate envs with different workspaces, or with explicitly different resources in the same workspace.
https://www.terraform.io/docs/state/workspaces.html

@fewbits commented Jul 10, 2020

Now I'm using this approach:

variables.tf:

variable "azure_location" {
  type = map
  default = {
    "dev" = "East US 2"
    "qa" = "East US 2"
    "prod" = "Brazil South"
  }
}

resource-groups.tf:

resource "azurerm_resource_group" "my-resource-group" {
  name     = "MY-RESOURCE-GROUP"
  location = var.azure_location[terraform.workspace]
}

This way, when I execute terraform workspace select prod, I get the variables associated with terraform.workspace => prod.

I don't know if this is the best approach, though.

@chrisfowles commented Jul 11, 2020

@fewbits - I've used a similar pattern in the past. I think it's probably better to use locals instead of variable defaults for this, though, unless you really explicitly want to be able to override it from outside the module.
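
That locals-based variant of @fewbits' example might look like:

locals {
  # per-workspace values, no longer overridable from outside the module
  azure_location = {
    dev  = "East US 2"
    qa   = "East US 2"
    prod = "Brazil South"
  }
}

resource "azurerm_resource_group" "my-resource-group" {
  name     = "MY-RESOURCE-GROUP"
  location = local.azure_location[terraform.workspace]
}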

@kyledakid commented Sep 26, 2020

Hi @fewbits, how can you specify the workspace in the terraform block when using your pattern? (I'm using Terraform Cloud.)

terraform {
  required_version = ">= 0.13.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my org"

    workspaces {
      # How do I dynamically load the workspace name here when using your pattern?
      name = "dainfra-dev"
    }
  }
}

From the Terraform Cloud docs, I can use prefix = "dainfra-" here to apply the code to all three dainfra envs (dev, stg, prod). But the interpolation of terraform.workspace will always return "default", so I cannot use your pattern.

@sereinity commented Sep 28, 2020

@kyledakid in all my TF projects I put:

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}

For my Terraform Cloud remote runs I define a variable workspace with the current workspace name in it, and I always refer to the workspace with local.workspace.

With this, I can reuse my code for remote and local runs.
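
Presumably the accompanying declaration looks something like this (the description is a guess at the intent):

variable "workspace" {
  type        = string
  default     = ""
  description = "Set on Terraform Cloud remote runs; leave empty for local runs"
}

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}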

@kyledakid commented Sep 28, 2020

Yes, exactly. I found this somewhere on Medium after posting the above question to you. Thank you @sereinity!

@fewbits commented Oct 3, 2020

(Quoting @kyledakid's question above about dynamically setting the workspace name in the remote backend block.)

Hi @kyledakid. I do not use Terraform Cloud (I just use terraform CLI commands in a CI/CD pipeline).

@sereinity, nice hint.

@github-usr-name commented Jan 9, 2021

Avoiding the need to modify main.tf just for a new workspace (and therefore allowing non-SCM'd local workspaces by completely decoupling the workspace settings from the source code):

main.tf

locals {
  workspace_yaml_file = "env/${terraform.workspace}.yaml"
  cluster = {
    nodes = (
      coalescelist(
        var.nodes,
        fileexists(local.workspace_yaml_file)
        ? yamldecode(file(local.workspace_yaml_file))
        : []
      )
    ),
    ssh_authorized_key = var.ssh_public_key_cicd,
    // ....
  }
}

If an explicit -var switch is used to set nodes, then that value is selected by the coalescelist function; if not, it will look for a file matching the workspace/environment name and attempt to decode it as YAML. If that fails, an empty array is returned, which triggers an error from coalescelist. This could obviously be tweaked to use whatever data type you need.

I don't think I've seen a previous solution which completely decouples the workspaces from main.tf; for example, @mhfs's otherwise great solution and @bborysenko's extension of it both require main.tf to have knowledge of the available workspaces (e.g., workspaces = "${merge(local.staging, local.production)}").

@github-usr-name commented Jan 9, 2021

locals {
  workspace = var.workspace != "" ? var.workspace : terraform.workspace
}

Nice tip @sereinity , I'm stealing it 👍

@joakimhellum commented Jan 25, 2021

@epomatti

But is it possible to decode or merge a .tfvars file?

There is a feature request for a tfvarsdecode function here: #25584

There is also an experimental "tfvars" provider in the registry that should allow this:

provider "tfvars" {}

data "tfvars_file" "example" {
  filename = "${terraform.workspace}.tfvars"
}

output "variables" {
  value = data.tfvars_file.example.variables
}

We have been experimenting with this for a while, but we're not sure if it's a pattern we really want to use. The only advantage we've found so far is that, by having the tfvars_file data source (or a future tfvarsdecode function), we can simplify our root modules by not having any variables, since we make the variables part of the configuration and not something the developers need to specify for each run. But in most cases we could achieve the same with data-only modules, which is less of a hack (see the sketch below).
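
A data-only module along those lines might look like this sketch (the module path and settings are hypothetical):

# modules/env/main.tf - no resources, just a per-workspace lookup table
variable "workspace" {
  type = string
}

locals {
  settings = {
    staging    = { region = "eu-west-1" }
    production = { region = "eu-west-2" }
  }
}

output "settings" {
  value = local.settings[var.workspace]
}

# in the root module:
module "env" {
  source    = "./modules/env"
  workspace = terraform.workspace
}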

For us the biggest challenge is that a Terraform Cloud workspace is not the same as a Terraform OSS workspace. We hope any new workspace features mean less difference between the two.
