
[ISSUE] databricks_permissions shows permanent drift if the owner is not the same as the TF identifier #3730

Closed
nkvuong opened this issue Jul 2, 2024 · 4 comments · Fixed by #3829

Comments

nkvuong (Contributor) commented Jul 2, 2024

Configuration

resource "databricks_permissions" "sql_warehouse" {
    access_control {
        group_name = "users"
        permission_level = "CAN_MONITOR"
    }
    sql_endpoint_id = "fba2ef8e11b2c0a3" # this endpoint is owned by a different principal
}

Expected Behavior

`terraform plan` should report no changes

Actual Behavior

`terraform plan` shows planned changes on every run:

  ~ resource "databricks_permissions" "sql_warehouse" {
        id              = "/sql/warehouses/xxx"
        # (2 unchanged attributes hidden)

      - access_control {
          - permission_level       = "IS_OWNER" -> null
          - user_name              = "xxx.xxx@databricks.com" -> null
            # (2 unchanged attributes hidden)
        }
      - access_control {
          - group_name             = "users" -> null
          - permission_level       = "CAN_MONITOR" -> null
            # (2 unchanged attributes hidden)
        }
      + access_control {
          + group_name       = "users"
          + permission_level = "CAN_MONITOR"
        }
    }

Terraform and provider versions

1.48.2

Important Factoids

Running terraform apply does not clear the diff
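
A quick way to confirm where the drift comes from is to read the warehouse ACL straight from the Permissions REST API (GET /api/2.0/permissions/sql/warehouses/{id}): the backend reports an IS_OWNER entry for the warehouse owner even though the configuration never declares one. A minimal Go sketch, assuming DATABRICKS_HOST and DATABRICKS_TOKEN are set in the environment:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	host := os.Getenv("DATABRICKS_HOST")   // e.g. https://<workspace>.cloud.databricks.com
	token := os.Getenv("DATABRICKS_TOKEN") // a token allowed to read the ACL
	warehouseID := "fba2ef8e11b2c0a3"      // the warehouse from the repro above

	req, err := http.NewRequest("GET",
		host+"/api/2.0/permissions/sql/warehouses/"+warehouseID, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The access_control_list in the response contains the owner's IS_OWNER
	// entry; the provider (before the fix) round-trips it into state, which
	// is what produces the perpetual diff shown above.
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}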

NiklasA commented Jul 10, 2024

I am encountering a similar issue, where `terraform apply` shows changes every time it is run, even though no code or configuration modifications have been made.

Code:

variable "data_products" {
  description = "List of all data products with their respective attributes."
  type = list(object({
    id                = string
    repo_url          = string
    group_name_prefix = string
  }))
}

resource "databricks_permissions" "data_products_general_shared_autoscaling" {
  for_each = {
    for product in var.data_products : product.id => product
  }

  cluster_id = databricks_cluster.general_shared_autoscaling.id

  access_control {
    group_name       = "${each.value.group_name_prefix}_MANAGE"
    permission_level = "CAN_RESTART"
  }

  access_control {
    group_name       = "${each.value.group_name_prefix}_EDIT"
    permission_level = "CAN_RESTART"
  }

  access_control {
    group_name       = "${each.value.group_name_prefix}_RUN"
    permission_level = "CAN_RESTART"
  }

  access_control {
    group_name       = "${each.value.group_name_prefix}_MANAGE"
    permission_level = "CAN_ATTACH_TO"
  }

  access_control {
    group_name       = "${each.value.group_name_prefix}_EDIT"
    permission_level = "CAN_ATTACH_TO"
  }

  access_control {
    group_name       = "${each.value.group_name_prefix}_RUN"
    permission_level = "CAN_ATTACH_TO"
  }
}

Terminal

Plan: 0 to add, 5 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

patrickwilliamconway commented

I'm also seeing this state drift. I'm migrating from basic auth to OAuth M2M: all existing resources are owned by the root admin user, and I'm now trying to manage them via a service principal.

patrickwilliamconway commented

Hey @nkvuong, do you have any insight into fixing this? I tried going back a few provider versions but was still hitting the issue. The trouble is that I have existing resources I can't easily destroy and recreate, so I can't go back very far. Currently I'm just using a lifecycle block to ignore the diffs. Not ideal, but 🤷

resource "databricks_permissions" "endpoint_usage" {
  sql_endpoint_id = databricks_sql_endpoint.endpoint.id

  access_control {
    group_name       = var.company_group_name
    permission_level = "CAN_USE"
  }

  lifecycle {
    # https://github.com/databricks/terraform-provider-databricks/issues/3730
    ignore_changes = [
      access_control
    ]
  }
}
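
One caveat with this stopgap: ignore_changes = [access_control] also masks legitimate permission changes, so any real ACL updates have to be made outside Terraform (or by temporarily removing the lifecycle block) until the provider fix lands.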

Also, any chance the underlying issue is related to #2543? I'm using databricks_permissions to manage a sql_warehouse, not a cluster, but I assume they're somewhat related.

alexott (Contributor) commented Jul 18, 2024

no, it's not related to #2543 - warehouses have their own permissions
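
For context, clusters and SQL warehouses are separate object types in the Permissions API, each with its own endpoint and permission levels, which is why a cluster-specific bug would not affect warehouses. A tiny illustrative Go helper (the cluster ID below is hypothetical; the paths follow the public REST API):

package main

import "fmt"

// permissionsPath builds the REST Permissions API path for a given object
// type. Clusters and SQL warehouses are distinct object types with distinct
// ACLs, so their permission handling is independent.
func permissionsPath(objectType, objectID string) string {
	return fmt.Sprintf("/api/2.0/permissions/%s/%s", objectType, objectID)
}

func main() {
	fmt.Println(permissionsPath("clusters", "0123-456789-abcdefgh"))   // cluster ACL
	fmt.Println(permissionsPath("sql/warehouses", "fba2ef8e11b2c0a3")) // warehouse ACL
}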

github-merge-queue bot pushed a commit that referenced this issue Aug 9, 2024
…not specified (#3829)

## Changes
- SQL warehouses support specifying the `IS_OWNER` permission and therefore
require the same workaround as jobs & pipelines.
- Resolves #3730

## Tests

- [x] `make test` run locally
- [x] relevant change in `docs/` folder
- [x] covered with integration tests in `internal/acceptance`
- [x] relevant acceptance tests are passing
- [x] using Go SDK
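
For reference, the workaround the commit describes amounts to dropping the implicit owner entry when the ACL is read back, so it never reaches the diff against a configuration that doesn't declare it. A simplified sketch of that pattern (illustrative types, not the provider's actual code):

package main

import "fmt"

// accessControl mirrors one access_control block of databricks_permissions.
type accessControl struct {
	UserName        string
	GroupName       string
	PermissionLevel string
}

// stripImplicitOwner removes the IS_OWNER entry that the backend reports for
// jobs, pipelines and (with #3829) SQL warehouses, so an owner that is not
// declared in the configuration no longer shows up as perpetual drift.
func stripImplicitOwner(acl []accessControl) []accessControl {
	out := make([]accessControl, 0, len(acl))
	for _, entry := range acl {
		if entry.PermissionLevel == "IS_OWNER" {
			continue
		}
		out = append(out, entry)
	}
	return out
}

func main() {
	acl := []accessControl{
		{UserName: "owner@example.com", PermissionLevel: "IS_OWNER"},
		{GroupName: "users", PermissionLevel: "CAN_MONITOR"},
	}
	fmt.Println(stripImplicitOwner(acl)) // only the CAN_MONITOR entry remains
}
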
hshahconsulting pushed a commit to hshahconsulting/terraform-provider-databricks that referenced this issue Aug 13, 2024