
can't provision a cluster with less than 2 "zones" #43

Closed

czka opened this issue Dec 18, 2018 · 8 comments

Comments

czka commented Dec 18, 2018

$ terraform --version
Terraform v0.11.11
+ provider.google v1.19.1

With the code pasted in the bottom-most section of this ticket, which seems valid per your docs and examples, I'm getting the following error from terraform plan:

Error: Error running plan: 1 error(s) occurred:

* module.gke.local.cluster_type_output_zonal_zones: local.cluster_type_output_zonal_zones: Resource 'google_container_cluster.zonal_primary' does not have attribute 'additional_zones' for variable 'google_container_cluster.zonal_primary.*.additional_zones'

Why is the user forced to specify more than one zone? This is supposed to be a generic module, after all.

variable "default-scopes" {
  type = "list"

  default = [
    "https://www.googleapis.com/auth/monitoring",
    "https://www.googleapis.com/auth/devstorage.read_only",
    "https://www.googleapis.com/auth/logging.write",
    "https://www.googleapis.com/auth/service.management.readonly",
    "https://www.googleapis.com/auth/servicecontrol",
    "https://www.googleapis.com/auth/trace.append",
  ]
}

module "gke" {
  source                     = "github.com/terraform-google-modules/terraform-google-kubernetes-engine?ref=master"
  ip_range_pods              = ""                 #TODO
  ip_range_services          = ""                 #TODO
  name                       = "cluster-you-name-it"
  network                    = "vpc-you-name-it"
  project_id                 = "project-you-name-it"
  region                     = "europe-west1"
  subnetwork                 = "vpc-sub-you-name-it"
  zones                      = ["europe-west1-c"]
  monitoring_service         = "monitoring.googleapis.com/kubernetes"
  logging_service            = "logging.googleapis.com/kubernetes"
  maintenance_start_time     = "04:00"
  kubernetes_version         = "1.11.3-gke.18"
  horizontal_pod_autoscaling = true
  regional                   = false

  node_pools = [
    {
      name               = "core"
      machine_type       = "n1-standard-2"
      oauth_scopes       = "${var.default-scopes}"
      min_count          = 1
      max_count          = 20
      auto_repair        = true
      auto_upgrade       = false
      initial_node_count = 20
    },
    {
      name               = "cc"
      machine_type       = "custom-6-23040"
      oauth_scopes       = "${var.default-scopes}"
      min_count          = 0
      max_count          = 20
      auto_repair        = true
      auto_upgrade       = false
      initial_node_count = 20
      preemptible        = true
      node_version       = "1.10.9-gke.7"
    },
  ]

  node_pools_labels = {
    all  = {}
    core = {}
    cc   = {}
  }

  node_pools_tags = {
    all  = []
    core = []
    cc   = []
  }

  node_pools_taints = {
    all  = []
    core = []
    cc   = []
  }
}
morgante (Contributor) commented:

I don't think it's supported to have a regional cluster with a single zone. For that case, I think you would want to create a zonal cluster, as per this example: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_zonal

czka commented Dec 19, 2018

@morgante Why do you think I want to have a regional cluster? I've set regional = false.

bzub commented Dec 21, 2018

I'm hitting this issue as well, although it only started happening after a terraform destroy in which the previously created cluster had already been deleted outside of Terraform (via the gcloud CLI). So there may be some resources left over in my GCP project that get picked up when Terraform refreshes. terraform show is empty, so I don't think it's a state issue on the Terraform side.

bzub commented Dec 22, 2018

Now I'm seeing this issue on a brand new GCP project (created with project-factory module).

bzub commented Dec 22, 2018

This interpolation looks for a list element at index 1, which doesn't exist when var.zones has a length of only one:

additional_zones = ["${slice(var.zones,1,length(var.zones))}"]

I'm assuming that's part of the issue, since this local ends up with no zones at all instead of the single zone provided to the module's zones variable:

cluster_type_output_zonal_zones = "${concat(google_container_cluster.zonal_primary.*.additional_zones, list(list()))}"
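One possible workaround, sketched below, would be to guard the slice so it degrades to an empty list when only a single zone is supplied. This is only an assumption about how it could be fixed, not the change that actually landed in #50; the join/split round-trip is the usual Terraform 0.11 idiom, since 0.11 conditionals cannot return lists directly.

```hcl
# Hypothetical guard for Terraform 0.11: when var.zones has more than
# one element, keep everything after the first zone; otherwise produce
# an empty list. compact() strips the empty string that split(",", "")
# would otherwise leave behind.
locals {
  additional_zones = ["${compact(split(",", length(var.zones) > 1 ? join(",", slice(var.zones, 1, length(var.zones))) : ""))}"]
}
```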

morgante (Contributor) commented:

Thanks for identifying the issue - that does indeed look like what needs to be fixed.

ogreface commented Jan 2, 2019

Potential fix in: #50

czka commented Jan 14, 2019

#50 works for me.
