
Update terraform for GKE clusters #353

Merged
merged 1 commit into kubernetes:master from thockin:terraform-plus-thockin on Sep 6, 2019

Conversation

@thockin (Member) commented Sep 5, 2019

I went through the gcloud docs and mapped every flag to terraform, and
made sure this represents my best understanding of them.

I ran this against the test project. I think we are just about ready to
do it for real.

Two things I wanted to resolve:

  1. Where should we store the state? It doesn't seem right to ask
     everyone to sign up for an account on another service, when we could
     store state on GCS. How? (See the backend sketch after this list.)

  2. I'd like us to agree on how we want to do multiple clusters in a
     single project, because I assume that will be pretty important.
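
On the state question, a minimal sketch of what GCS-backed remote state could look like; the bucket name and prefix below are placeholders, not anything decided in this PR:

terraform {
  backend "gcs" {
    // Hypothetical bucket/prefix; whichever bucket the group settles on would go here.
    bucket = "k8s-infra-terraform-state"
    prefix = "clusters/k8s-infra-dev-cluster-turnup"
  }
}

On multiple clusters per project, one common pattern (again only a sketch, not something this PR implements) is to pull the cluster and node pool resources into a module and instantiate it once per cluster; the module path and input names here are hypothetical:

module "cluster_alpha" {
  source           = "./modules/gke-cluster" // hypothetical module path
  cluster_name     = "cluster-alpha"
  cluster_location = "us-central1"
}

module "cluster_beta" {
  source           = "./modules/gke-cluster"
  cluster_name     = "cluster-beta"
  cluster_location = "us-central1"
}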

thockin force-pushed the thockin:terraform-plus-thockin branch from 22d8db9 to 4c46bc2 on Sep 5, 2019

@thockin (Member Author) commented Sep 5, 2019

re-pushed with updated config

@thockin (Member Author) commented Sep 6, 2019

BTW - I have left this cluster up and running for us to verify. I have a few things I want to try, but no time today, probably.

@cblecker (Member) commented Sep 6, 2019

A few changes needed. Patch:

diff --git a/clusters/k8s-infra-dev-cluster-turnup/10-clusterconfiguration.tf b/clusters/k8s-infra-dev-cluster-turnup/10-clusterconfiguration.tf
index 4f24863..f3d4bc1 100644
--- a/clusters/k8s-infra-dev-cluster-turnup/10-clusterconfiguration.tf
+++ b/clusters/k8s-infra-dev-cluster-turnup/10-clusterconfiguration.tf
@@ -58,12 +58,6 @@ resource "google_bigquery_dataset" "usage_metering" {
   delete_contents_on_destroy = true
 }
 
-// This retrives the available GKE versions
-data "google_container_engine_versions" "cluster" {
-  project  = data.google_project.project.id
-  location = local.cluster_location
-}
-
 // Create GKE cluster, but with no node pools. Node pools can be provisioned below
 resource "google_container_cluster" "cluster" {
   name     = local.cluster_name
@@ -81,7 +75,7 @@ resource "google_container_cluster" "cluster" {
   // Network config
   network = "default"
   ip_allocation_policy {
-    use_ip_aliases = true
+    use_ip_aliases    = true
     create_subnetwork = true
   }
 
@@ -156,7 +150,7 @@ resource "google_container_cluster" "cluster" {
 
   // Enable PodSecurityPolicy enforcement
   pod_security_policy_config {
-    enabled = false  // TODO: we should turn this on
+    enabled = false // TODO: we should turn this on
   }
 
   // Enable VPA
diff --git a/clusters/k8s-infra-dev-cluster-turnup/11-pool-1-configuration.tf b/clusters/k8s-infra-dev-cluster-turnup/11-pool-1-configuration.tf
index 41c0d31..2514f5a 100644
--- a/clusters/k8s-infra-dev-cluster-turnup/11-pool-1-configuration.tf
+++ b/clusters/k8s-infra-dev-cluster-turnup/11-pool-1-configuration.tf
@@ -32,15 +32,22 @@ resource "google_container_node_pool" "pool1" {
 
   // Set machine type, and enable all oauth scopes tied to the service account
   node_config {
-    machine_type    = "n1-standard-4"
-    disk_size_gb    = 100
-    disk_type       = "pd-standard"
+    machine_type = "n1-standard-4"
+    disk_size_gb = 100
+    disk_type    = "pd-standard"
 
     service_account = google_service_account.cluster_node_sa.email
     oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
 
-    // workload_metadata_config is not needed with cluster
-    // workload_identity_config enabled.
+
+    // Needed for workload identity
+    workload_metadata_config {
+      node_metadata = "GKE_METADATA_SERVER"
+    }
+    metadata = {
+      disable-legacy-endpoints = "true"
+    }
+
   }
 
   // If we need to destroy the node pool, create the new one before destroying
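
The last context line in the patch ("If we need to destroy the node pool, create the new one before destroying") refers to Terraform's create_before_destroy lifecycle setting; inside the node pool resource that presumably reads something like:

  // Replace the node pool by creating the new one before destroying the old one
  lifecycle {
    create_before_destroy = true
  }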

thockin force-pushed the thockin:terraform-plus-thockin branch from 4c46bc2 to 5c02465 on Sep 6, 2019

@thockin (Member Author) commented Sep 6, 2019

All of it makes sense to me except the workload_metadata_config part. Those are defaults. Yet terraform really wants me to have them: it says it will replace the node pool, but it will come out the same. That seems broken... ?

Regardless, patch applied and pushed.

@thockin (Member Author) commented Sep 6, 2019

Curious: I used the cluster name "thtest" before and switched it back to "k8s-services-cluster", but the cluster still came up as "thtest". I am going to burn it down and repeat.

@thockin (Member Author) commented Sep 6, 2019

I am a doofus and did not refresh my screen. Sorry. Will turn cluster back up :)

@cblecker (Member) commented Sep 6, 2019

All of it makes sense to me except the workload_metadata_config part. Those are defaults.

It depends on the GKE API. Sometimes it omits fields that are unset, sometimes it forces them to a specific setting. In this case the workload identity setting on the cluster forces an explicit setting on the node pool; we just need to tell TF that this is known and expected.
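
For context, the interaction is roughly as follows (a sketch only; the identity namespace value is a placeholder, not copied from this repo): enabling workload identity at the cluster level makes GKE run the metadata server on the node pools, so the node pool has to declare that explicitly or Terraform sees a diff it wants to "fix".

resource "google_container_cluster" "cluster" {
  // ...
  // Workload identity enabled at the cluster level
  workload_identity_config {
    identity_namespace = "my-project.svc.id.goog" // placeholder project ID
  }
}

resource "google_container_node_pool" "pool1" {
  // ...
  node_config {
    // Must be set explicitly to match what GKE enforces because of the
    // cluster-level workload identity config above
    workload_metadata_config {
      node_metadata = "GKE_METADATA_SERVER"
    }
  }
}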

@thockin (Member Author) commented Sep 6, 2019

Right, after I patched your change in, tf apply showed me no delta, so you are (of course) correct.

@cblecker (Member) commented Sep 6, 2019

/lgtm
/approve

k8s-ci-robot added the lgtm label on Sep 6, 2019

@k8s-ci-robot (Contributor) commented Sep 6, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cblecker

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot merged commit d423838 into kubernetes:master on Sep 6, 2019

2 of 3 checks passed:
tide: Not mergeable. Needs approved, lgtm labels.
cla/linuxfoundation: thockin authorized
pull-k8sio-cip: Skipped.