
creating google_container_cluster with resource_usage_export_config fails #6814

Closed

@rohitvyavahare
Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.13.0-beta3

Affected Resource(s)

  • google_container_cluster

Terraform Configuration Files

resource "google_container_cluster" "regional_cluster" {
  provider = google-beta

  # Required
  name = var.cluster_name

  # Optional
  location                    = var.location
  node_locations              = var.node_locations
  description                 = var.description
  enable_kubernetes_alpha     = var.enable_kubernetes_alpha
  initial_node_count          = var.initial_node_count
  logging_service             = var.logging_service
  min_master_version          = var.min_master_version
  monitoring_service          = var.monitoring_service
  network                     = lookup(var.network_config, "name")
  project                     = var.project_id
  remove_default_node_pool    = true
  resource_labels             = var.resource_labels
  subnetwork                  = lookup(var.network_config, "subnet_name")
  enable_binary_authorization = var.enable_binary_authorization
  enable_tpu                  = var.enable_tpu
  enable_legacy_abac          = var.enable_legacy_abac
  enable_shielded_nodes       = var.enable_shielded_nodes

  master_authorized_networks_config {
    dynamic "cidr_blocks" {
      for_each = var.master_authorized_networks_cidr_blocks
      content {
        cidr_block   = cidr_blocks.value.cidr_block
        display_name = cidr_blocks.value.display_name
      }
    }
  }

  addons_config {
    horizontal_pod_autoscaling {
      disabled = var.disable_horizontal_pod_autoscaling
    }

    http_load_balancing {
      disabled = var.disable_http_load_balancing
    }

    network_policy_config {
      disabled = var.disable_network_policy_config
    }

    cloudrun_config {
      disabled = var.disable_cloudrun_config
    }

    dns_cache_config {
      enabled = var.enable_dns_cache_config
    }

    kalm_config {
      enabled = var.enable_kalm_config
    }

    config_connector_config {
      enabled = var.enable_config_connector_config
    }

    dynamic "istio_config" {
      for_each = var.istio_config
      content {
        disabled = lookup(var.istio_config, disabled)
        auth     = lookup(var.istio_config, auth)
      }
    }

    gce_persistent_disk_csi_driver_config {
      enabled = var.enable_gce_persistent_disk_csi_driver
    }
  }

  ip_allocation_policy {
    # pods IP address range name
    cluster_secondary_range_name = "pods"

    # services IP address range name
    services_secondary_range_name = "services"
  }

  pod_security_policy_config {
    enabled = var.pod_security_policy_config_enabled
  }

  master_auth {
    username = var.master_auth_username
    password = var.master_auth_password

    client_certificate_config {
      issue_client_certificate = var.issue_client_certificate
    }
  }

  private_cluster_config {
    master_ipv4_cidr_block  = var.private_cluster ? var.master_ipv4_cidr_block : ""
    enable_private_nodes    = var.enable_private_nodes
    enable_private_endpoint = var.enable_private_endpoint
  }

  dynamic "resource_usage_export_config" {
    for_each = var.resource_usage_export_config != null ? toset(["resource_usage_export_config"]) : toset([])
    content {
      enable_network_egress_metering       = lookup(var.resource_usage_export_config, "enable_network_egress_metering")
      enable_resource_consumption_metering = lookup(var.resource_usage_export_config, "enable_resource_consumption_metering")
      bigquery_destination {
        dataset_id = google_bigquery_dataset.cluster_dataset.dataset_id
      }
    }
  }

  dynamic "database_encryption" {
    for_each = local.kms_crypto_config.name
    content {
      state    = "ENCRYPTED"
      key_name = google_kms_crypto_key.encrypt_decrypt_key.id
    }
  }

}

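For reference, the dynamic resource_usage_export_config block above expects a variable shaped roughly like the following (a sketch only; the variable definitions were not included in the report, so the type and default here are assumptions):

variable "resource_usage_export_config" {
  description = "Set to null to skip resource usage export; otherwise a map of metering flags, e.g. { enable_network_egress_metering = true, enable_resource_consumption_metering = true }."
  type        = map(bool)
  default     = null
}
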
Debug Output

Panic Output

https://gist.github.com/rohitvyavahare/c060deba2059de0105d92bd694a51499

Expected Behavior

The container cluster should be created successfully with resource_usage_export_config set.

Actual Behavior

Error: rpc error: code = Unavailable desc = transport is closing

Steps to Reproduce

Create a container cluster with resource_usage_export_config.

If we create the cluster without resource_usage_export_config, creation succeeds, and adding resource_usage_export_config afterwards applies successfully as well; the error only occurs when the block is included at initial creation.

  1. terraform apply

@ghost ghost added the bug label Jul 21, 2020
@edwardmedia edwardmedia self-assigned this Jul 21, 2020
@edwardmedia
Contributor

edwardmedia commented Jul 21, 2020

@rohitvyavahare I've just created a cluster using the code below and it works fine for me. Can you share your full debug log?

resource "google_container_cluster" "primary" {
  provider           = google-beta
  name               = "issue6814-cluster"
  location           = "us-central1-a"
  initial_node_count = 3
  master_auth {
    username = ""
    password = ""
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
    metadata = {
      disable-legacy-endpoints = "true"
    }
    labels = {
      foo = "bar"
    }
    tags = ["foo", "bar"]
  }
  timeouts {
    create = "30m"
    update = "40m"
  }
  resource_usage_export_config {
    enable_network_egress_metering       = false
    enable_resource_consumption_metering = true
    bigquery_destination {
      dataset_id = "cluster_resource_usage"
    }
  }
}

@rohitvyavahare
Author

@edwardmedia thank you for the reply.

I tested again and found that if dataset_id is empty ("") in the following config, it throws that error:

+ resource_usage_export_config {
          + enable_network_egress_metering       = true
          + enable_resource_consumption_metering = true

          + bigquery_destination {}
        }
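
For the record, here is a minimal configuration that reproduces the crash based on the plan output above (a sketch; the cluster name, location, and dataset id are placeholders):

resource "google_container_cluster" "repro" {
  provider           = google-beta
  name               = "issue6814-repro"
  location           = "us-central1-a"
  initial_node_count = 1

  resource_usage_export_config {
    enable_network_egress_metering       = true
    enable_resource_consumption_metering = true

    # An empty dataset_id (or an empty bigquery_destination block) is what triggers the panic.
    bigquery_destination {
      dataset_id = ""
    }
  }
}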

@ghost ghost removed the waiting-response label Jul 22, 2020
@edwardmedia
Contributor

@rohitvyavahare yes, setting dataset_id = "" does cause the crash. The documentation indicates bigquery_destination.dataset_id is required, so I think we should update the field validation.
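
Until that validation lands, one defensive pattern on the caller side (a sketch reusing the variable and resource names from the original configuration) is to emit the block only when an export config is supplied, and to always source dataset_id from the managed dataset so it is never sent as an empty string:

  dynamic "resource_usage_export_config" {
    # Emit the block only when export is requested; never pass an empty dataset_id.
    for_each = var.resource_usage_export_config != null ? [var.resource_usage_export_config] : []

    content {
      enable_network_egress_metering       = resource_usage_export_config.value.enable_network_egress_metering
      enable_resource_consumption_metering = resource_usage_export_config.value.enable_resource_consumption_metering

      bigquery_destination {
        dataset_id = google_bigquery_dataset.cluster_dataset.dataset_id
      }
    }
  }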

@ghost

ghost commented Aug 22, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@hashicorp hashicorp locked and limited conversation to collaborators Aug 22, 2020