
[ISSUE] Replacement of databricks_instance_pool causes error in update of cluster #824

Closed
Freia3 opened this issue Sep 9, 2021 · 0 comments · Fixed by #850

Freia3 commented Sep 9, 2021

We want to update the Databricks runtime version to "9.0.x-scala2.12", so the databricks_instance_pool has to be replaced. Replacing the databricks_instance_pool causes an error during terraform apply.

Configuration

resource "databricks_instance_pool" "pool" {
  instance_pool_name                    = "pool"
  min_idle_instances                    = 0
  node_type_id                          = var.databricks_node_type
  preloaded_spark_versions              = [var.databricks_spark_version]
  idle_instance_autotermination_minutes = 15
  enable_elastic_disk                   = true
  azure_attributes {
    availability       = "ON_DEMAND_AZURE"
    spot_bid_max_price = 0
  }
}
resource "databricks_cluster" "terraform" {
  cluster_name            = "terraform-only"
  spark_version           = var.databricks_spark_version
  instance_pool_id        = databricks_instance_pool.pool.id
  autotermination_minutes = 15
  is_pinned               = true
  num_workers             = 0
  spark_conf = {
    "spark.databricks.cluster.profile" : "singleNode",
    "spark.master" : "local[*]"
  }
  custom_tags = {
    ResourceClass = "SingleNode"
  }
}
variable "databricks_spark_version" {
  default     = "9.0.x-scala2.12"
  description = "Spark version to be used inside Databricks"
  type        = string
}

Expected Behavior

databricks_cluster.terraform should have been updated in-place so that it uses the new pool.
terraform plan:

 # databricks_cluster.terraform will be updated in-place
  ~ resource "databricks_cluster" "terraform" {
        id                           = "123456"
      ~ instance_pool_id             = "old_id" -> (known after apply)
      ~ spark_version                = "8.1.x-scala2.12" -> "9.0.x-scala2.12"
        # (15 unchanged attributes hidden)
    }
  # databricks_instance_pool.pool must be replaced
-/+ resource "databricks_instance_pool" "pool" {
      ~ id                                    = "old_id" -> (known after apply)
      ~ instance_pool_id                      = "old_id" -> (known after apply)
      - max_capacity                          = 0 -> null
      ~ preloaded_spark_versions              = [ # forces replacement
          - "8.1.x-scala2.12",
          + "9.0.x-scala2.12",
        ]
        # (5 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }
Actual Behavior

│ Error: Can't find an instance pool with id: old_id.

│ with databricks_cluster.terraform,
│ on databricks_clusters.tf line 1, in resource "databricks_cluster" "terraform":
│ 1: resource "databricks_cluster" "terraform" {

Steps to Reproduce

  1. Update the Databricks runtime version used by databricks_instance_pool (via var.databricks_spark_version) so that the pool is forced to be replaced (a sketch of this change follows the list).
  2. Run terraform apply.
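
For clarity, here is a minimal sketch of the change that triggers the replacement; the previous version "8.1.x-scala2.12" is taken from the plan output above, and the variable matches the configuration shown earlier:

variable "databricks_spark_version" {
  # was "8.1.x-scala2.12"; bumping the preloaded runtime forces the instance pool to be replaced
  default     = "9.0.x-scala2.12"
  description = "Spark version to be used inside Databricks"
  type        = string
}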

Terraform and provider versions

Terraform v1.0.4
provider registry.terraform.io/databrickslabs/databricks v0.3.7

Fix

When I add the driver_instance_pool_id argument to the databricks_cluster resource, it resolves the issue. This is not expected behavior.

resource "databricks_cluster" "terraform" {
  cluster_name            = "terraform-only"
  spark_version           = var.databricks_spark_version
  instance_pool_id        = databricks_instance_pool.pool.id

  driver_instance_pool_id = databricks_instance_pool.pool.id

  autotermination_minutes = 15
  is_pinned               = true
  num_workers             = 0
  spark_conf = {
    "spark.databricks.cluster.profile" : "singleNode",
    "spark.master" : "local[*]"
  }
  custom_tags = {
    ResourceClass = "SingleNode"
  }
}
@nfx nfx added this to the v0.3.8 milestone Oct 6, 2021
nfx added a commit that referenced this issue Oct 6, 2021
Added corner case to fix issue #824 where `driver_instance_pool_id` was not explicitly specified and old driver instance pool sent in cluster update request
@nfx nfx linked a pull request Oct 6, 2021 that will close this issue
@nfx nfx closed this as completed in #850 Oct 6, 2021
nfx added a commit that referenced this issue Oct 6, 2021
Added corner case to fix issue #824 where `driver_instance_pool_id` was not explicitly specified and old driver instance pool sent in cluster update request
@nfx nfx mentioned this issue Oct 7, 2021
michael-berk pushed a commit to michael-berk/terraform-provider-databricks that referenced this issue Feb 15, 2023
Added corner case to fix issue databricks#824 where `driver_instance_pool_id` was not explicitly specified and old driver instance pool sent in cluster update request (databricks#850)