First off, I want to say: stellar work, man. I really appreciate you putting all of this together!
It seems just about every K8s tutorial is about setting everything up from scratch in a new VPC. Ideally, I'd like to spin up this TF plan inside my existing VPC so we can begin to migrate from Docker Swarm to this cluster. My plan was basically to skip the initial VPC config stage and then translate the rest of the config to match our existing IDs.
The TF variable file I've been handed down is very basic compared to this, I did read up on what workspaces are and we aren't using them to my knowledge. (Or maybe just using the default one, I've never changed workspaces.)
The two things I'm confused on are:

1. What is `${terraform.workspace}` going to do? Is it just going to use the existing workspace, or is it defining a new one?
2. What is `${local.env}` doing? I've noticed it defined in a few places; is that simply referencing the `locals` values?

I'm sure I can take all of that stuff out and dumb it down to match our setup, but if you don't mind explaining how those work, I'd really appreciate it!
```hcl
locals {
  env               = "${terraform.workspace}"
  availabilityzone  = "${var.AWS_REGION}a"
  availabilityzone2 = "${var.AWS_REGION}b"
  cluster_name      = "${local.env}-cluster"

  // NOTE: The specific kubernetes.io/cluster/* resource tags below are
  // required for EKS and Kubernetes to discover and manage networking
  // resources.
  common_tags = "${map(
    "Environment", "${local.env}",
    "kubernetes.io/cluster/${local.cluster_name}", "shared"
  )}"
}

// Resource header assumed for context; the original snippet omitted it.
resource "aws_iam_role" "eks_cluster_role" {
  name               = "EKSClusterRole-${local.env}"
  description        = "Allows EKS to manage clusters on your behalf."
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}
```
`terraform workspace` allows you to use the same code to spin up multiple clusters while keeping each cluster's state separate in the same backend.

In this specific example, workspace == environment; it's just a name to distinguish clusters. `${terraform.workspace}` doesn't define a new workspace; it simply evaluates to the name of the currently selected workspace (`default`, if you've never switched). And `${local.env}` just reads the `env` value defined in the `locals` block.

But usually, depending on the environment, you might want to apply different settings (instance sizes, regions, etc.). Then you add some kind of mapper that maps the workspace name to a specific parameter set; see the example below.
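To make that concrete, here is a sketch of the usual CLI flow (standard Terraform workspace commands; nothing project-specific):

```
# List workspaces; if you've never switched, you're on "default".
terraform workspace list

# Create and switch to a "staging" workspace. Its state is stored
# separately, and terraform.workspace evaluates to "staging".
terraform workspace new staging

# Switch back to an existing workspace.
terraform workspace select default
```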
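A minimal sketch of such a mapper (the map name and instance sizes here are illustrative, not from the tutorial):

```hcl
locals {
  env = terraform.workspace

  // Hypothetical per-workspace settings; swap in your own values.
  instance_type_per_env = {
    default = "t3.small"
    staging = "t3.medium"
    prod    = "m5.large"
  }

  // lookup() falls back to "t3.small" if the workspace isn't listed.
  instance_type = lookup(local.instance_type_per_env, local.env, "t3.small")
}
```

With that in place, selecting the `prod` workspace before `terraform apply` picks up `m5.large` without changing any code.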