
Explanation of local.env and terraform.workspace #1

Closed
taylorturner opened this issue Mar 11, 2019 · 1 comment

@taylorturner

First off, I want to say stellar work, man. I really appreciate you putting all of this together!

It seems just about every K8s tutorial is about setting everything up from scratch in a new VPC. Ideally I'd like to spin up this TF plan inside my existing VPC so we can begin to migrate from Docker Swarm to this cluster. My plan was to basically skip the initial stage of the VPC config and then translate the rest of the config to match our existing IDs.

The TF variable file I've been handed down is very basic compared to this. I did read up on what workspaces are, and we aren't using them to my knowledge. (Or maybe just using the default one; I've never changed workspaces.)

The two things I'm confused about are:

  1. What is ${terraform.workspace} going to do? Is it just going to use the existing workspace, or is it defining a new one?
  2. What is ${local.env} doing? I've noticed it defined in a few places; is that simply referencing the locals block?

I'm sure I can take all of that stuff out and dumb it down to match our setup, but if you don't mind explaining how those work, I'd really appreciate it!

```hcl
locals {
  env = "${terraform.workspace}"

  availabilityzone  = "${var.AWS_REGION}a"
  availabilityzone2 = "${var.AWS_REGION}b"

  cluster_name = "${local.env}-cluster"

//  NOTE: The usage of the specific kubernetes.io/cluster/*
//  resource tags below are required for EKS and Kubernetes to discover
//  and manage networking resources.

  common_tags = "${map(
    "Environment", "${local.env}",
    "kubernetes.io/cluster/${local.cluster_name}", "shared"
  )}"
}

// (resource header restored for context; the resource label is assumed)
resource "aws_iam_role" "EKSClusterRole" {
  name               = "EKSClusterRole-${local.env}"
  description        = "Allows EKS to manage clusters on your behalf."
  assume_role_policy = <<POLICY
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "Service":"eks.amazonaws.com"
         },
         "Action":"sts:AssumeRole"
      }
   ]
}
POLICY
}
```
@Voronenko
Owner

terraform workspace allows you to use the same scripts to spin up multiple clusters and store their state in the same codebase.

In this specific example, workspace == environment: just a name to distinguish clusters.

But usually, depending on the environment, you might want to apply different settings (like image sizes, regions, etc.), so you would have some kind of mapper that maps the workspace to a specific parameter set; see the example below:

https://github.com/Voronenko/devops_wordpress_demo/blob/master/providers/digitalocean/variables-dictionaries.tf#L4
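A minimal sketch of such a mapper, in the same HCL 0.11 interpolation style as the quoted code (the variable and attribute names here are illustrative, not taken from the repository):

```hcl
// Hypothetical per-environment mapper: the active workspace
// (selected via `terraform workspace select <name>`) picks the
// matching value out of a map variable.

variable "instance_type_map" {
  type = "map"

  default = {
    default    = "t2.small"
    staging    = "t2.medium"
    production = "m5.large"
  }
}

locals {
  env = "${terraform.workspace}"

  // lookup() returns the map entry keyed by the active workspace,
  // falling back to "t2.small" for workspaces not listed above.
  instance_type = "${lookup(var.instance_type_map, local.env, "t2.small")}"
}
```

So after `terraform workspace select production`, every resource that references `local.instance_type` would get the production-sized value, while the `default` workspace keeps the small one.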
