
Unable to set local_ssd_count to 0 on cluster configuration #1258

Closed
ne250040 opened this issue Mar 6, 2024 · 4 comments · Fixed by #1418
Assignees: andrewnester
Labels: DABs (DABs related issues)


ne250040 commented Mar 6, 2024

Describe the issue

In the UI, I updated the cluster configuration to set the Number of Local SSDs to zero, and the cluster runs normally. However, setting this value to 0 via an Asset Bundle reverts the value to the default.

Configuration

With the configuration below, the cluster shown on the job has Local SSDs set to 1 instead of 0.

resources:
  jobs:
    jb_ingestion_some_data:
      name: jb_ingestion_some_data
      job_clusters:
        - job_cluster_key: some-cluster
          new_cluster:
            spark_version: 13.3.x-scala2.12
            num_workers: 2
            data_security_mode: "SINGLE_USER"
            node_type_id: n2-highmem-4
            gcp_attributes:
              google_service_account: ${var.gcp_cloud_service_account}
              local_ssd_count: 0
      tasks:
        - task_key: load_some_data
          job_cluster_key: some-cluster
          notebook_task:
            notebook_path: "${var.notebook_paths}/ingestion/load_fd_some_data.py"

Steps to reproduce the behavior

  1. Use the above configuration in the Asset Bundle configuration.
  2. Deploy the bundle to a workspace.
  3. Check the cluster configuration on the job details page.
  4. Under Advanced Options, "# Local SSDs" shows 1 instead of 0.

Expected Behavior

When local_ssd_count is set to 0 in the configuration, the deployed cluster should also have 0 local SSDs.

Actual Behavior

The Local SSD count on the cluster configuration is 1 instead of 0.

OS and CLI version

CLI: Databricks CLI v0.214.0
OS: macOS

Is this a regression?

This is a new setup; I'm not aware of the behavior in previous versions.

Debug Logs

output_logs_sample.log

ne250040 added the DABs (DABs related issues) label on Mar 6, 2024
andrewnester self-assigned this on Apr 15, 2024
andrewnester (Contributor) commented

It appears to be an issue on the Terraform provider side; DABs correctly generates the state with local_ssd_count set to zero:

"job_cluster": [
          {
            "job_cluster_key": "some-cluster",
            "new_cluster": {
              "data_security_mode": "SINGLE_USER",
              "gcp_attributes": {
                "local_ssd_count": 0
              },
              "node_type_id": "n2-highmem-4",
              "num_workers": 2,
              "spark_version": "13.3.x-scala2.12"
            }
          }

cc @mgyucht

ne250040 (Author) commented

Thanks @andrewnester for the suggestion to check the tfstate files.

I tried switching between 0 and non-zero values several times. In bundle.tf.json the value changes between 0 and non-zero as expected, but in terraform.tfstate, once the value becomes non-zero it never goes back to zero. This points to the Terraform provider deciding that the value should not return to 0 for "# Local SSDs".

andrewnester (Contributor) commented

@ne250040 yes, that's correct. We have to fix it on the Terraform provider side first and then upgrade the provider in the CLI.
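
For context, a likely mechanism behind this class of bug: with the Terraform plugin SDK, reading an attribute via GetOk cannot distinguish an explicit 0 from "not set", because ok is false for a type's zero value. Below is a minimal sketch with a hypothetical local_ssd_count attribute; it is illustrative only, not the actual databricks/terraform-provider-databricks code.

// Minimal sketch, assuming the provider reads attributes through the
// Terraform plugin SDK's GetOk. Illustrative only, not the actual
// databricks/terraform-provider-databricks implementation.
package main

import (
	"fmt"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func main() {
	r := &schema.Resource{
		Schema: map[string]*schema.Schema{
			// Hypothetical attribute mirroring gcp_attributes.local_ssd_count.
			"local_ssd_count": {Type: schema.TypeInt, Optional: true},
		},
	}

	// Simulate a plan where the user explicitly configured local_ssd_count = 0.
	d := r.TestResourceData()
	_ = d.Set("local_ssd_count", 0)

	// GetOk reports ok == false for a zero value, so provider code that guards
	// on `ok` skips the field, and the API default or the previous non-zero
	// state wins.
	v, ok := d.GetOk("local_ssd_count")
	fmt.Printf("value=%v ok=%v\n", v, ok) // prints: value=0 ok=false
}

Providers usually address this with explicit zero-value handling (for example GetOkExists or diff suppression). Whichever approach the provider fix ends up taking, the symptom observed above (0 never written back to state once a non-zero value exists) matches this pattern.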

andrewnester (Contributor) commented

For visibility, here's the required Terraform provider PR: databricks/terraform-provider-databricks#3385

github-merge-queue bot pushed a commit that referenced this issue May 6, 2024
## Changes
Upgrade TF provider to 1.42.0

Also fixes #1258