
Add databricks_connection resource to support Lakehouse Federation #2528

Merged: 24 commits into master from feature/uc-connection on Aug 22, 2023

Conversation

@nkvuong (Contributor) commented Jul 28, 2023

Changes

Add support for Lakehouse Federation

  • Add a new resource databricks_connection that represents Unity Catalog connections
  • Extend databricks_catalog to support foreign catalogs

Closes #2575
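
For orientation, here is a minimal sketch of the API call this resource wraps, using the Go SDK. The names and field spellings (including whether the map is called Options or OptionsKvpairs) depend on the SDK version, as discussed in the review below:

```go
package main

import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w := databricks.Must(databricks.NewWorkspaceClient())

	// Create a Unity Catalog connection; the options map carries the
	// connector-specific settings (host, port, credentials) as plain strings.
	conn, err := w.Connections.Create(ctx, catalog.CreateConnection{
		Name:           "my_mysql_connection", // hypothetical name
		ConnectionType: catalog.ConnectionTypeMysql,
		Options: map[string]string{
			"host":     "test.com",
			"port":     "1234",
			"user":     "user123",
			"password": "password123",
		},
	})
	if err != nil {
		panic(err)
	}
	println(conn.Name)
}
```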

Tests

  • `make test` run locally
  • relevant change in docs/ folder
  • covered with integration tests in internal/acceptance
  • relevant acceptance tests are passing
  • using Go SDK

@nkvuong requested review from a team as code owners July 28, 2023 17:10
@alexott (Contributor) left a comment:

looks good, minor comments that we can address later

Comment on lines +36 to +38
func(m map[string]*schema.Schema) map[string]*schema.Schema {
return m
})
Contributor:

I think that we have multiple places like this, so it makes sense to move this to the common package
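
For illustration, the shared helper could be as small as this (name and placement are hypothetical):

```go
package common

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// NoCustomize returns the schema map unchanged. StructToSchema expects a
// customization callback, so a shared no-op spares each resource from
// repeating the inline closure above.
func NoCustomize(m map[string]*schema.Schema) map[string]*schema.Schema {
	return m
}
```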

docs/resources/connection.md (outdated, resolved)
OptionsKvpairs: map[string]string{
"host": "test.com",
},
PropertiesKvpairs: map[string]string{
Contributor:

This needs to be a list of options:

"options":[
  {"key":"host","value":"mysqk.fakedb.com"},
  {"key":"port","value":"1234"},
  {"key":"user","value":"user123"},
  {"key":"password","value":"password123"}
]

@andrewli81 (Contributor) commented Jul 29, 2023:

Right now it is not a list:

"options_kvpairs": {
   "host": "test.com"
}

Contributor:

Is it really expected that this will be repeated key/value pairs? Other APIs accept objects as well, in addition to key/value pairs...

Contributor:

I believe the API docs are wrong; this is something we need to correct. There's no difference between the connection API code and other APIs.

@andrewli81 (Contributor) commented Aug 2, 2023:

cc @Doug-Luce - I am not sure the OpenAPI spec actually works - the spec gives the options as a key/value map:

"options_kvpairs": {
   "host": "test.com",
   "port": "1234",
   "user": "user123",
   "password": "password123"
}

but I know this always works:

"options":[
  {"key":"host","value":"mysqk.fakedb.com"},
  {"key":"port","value":"1234"},
  {"key":"user","value":"user123"},
  {"key":"password","value":"password123"}
]

It could be that we have an implementation deficiency in the connection API that prevents it from taking the KV style of input, but I think most likely it's a bug in our API doc, which we should fix.
[Screenshot 2023-08-01 at 11:29 PM]

Contributor:

I queried the REST API and saw the following response:

{
  "connection_id": "<REDACTED>",
  "connection_type": "POSTGRESQL",
  "created_at": 1681945142627,
  "created_by": "<REDACTED>",
  "credential_type": "USERNAME_PASSWORD",
  "full_name": "<REDACTED>",
  "metastore_id": "<REDACTED>",
  "name": "<REDACTED>",
  "options": {
    "host": "<REDACTED>",
    "port": "5432"
  },
  "owner": "<REDACTED>",
  "provisioning_info": {
    "state": "ACTIVE"
  },
  "read_only": true,
  "securable_kind": "CONNECTION_POSTGRESQL",
  "securable_type": "CONNECTION",
  "updated_at": 1681945142627,
  "updated_by": "<REDACTED>",
  "url": "jdbc://<REDACTED>:5432/"
}

Options is named options and is a dictionary of strings. This response had no properties, but that field is modeled identically internally.

I'll file a ticket to correct the OpenAPI specification. In the meantime, it seems like we should model this as a map[string]string.
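
A sketch of that modeling in Go, mirroring the response above (field set trimmed; a hypothetical struct, not the actual SDK type):

```go
// ConnectionInfo modeled with options (and properties) as plain string
// maps rather than repeated key/value pairs, matching the REST response.
type ConnectionInfo struct {
	ConnectionId   string            `json:"connection_id,omitempty"`
	ConnectionType string            `json:"connection_type,omitempty"`
	MetastoreId    string            `json:"metastore_id,omitempty"`
	Name           string            `json:"name,omitempty"`
	Options        map[string]string `json:"options,omitempty"`
	Properties     map[string]string `json:"properties,omitempty"`
	ReadOnly       bool              `json:"read_only,omitempty"`
}
```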

Contributor:

I'm updating the OpenAPI spec to fix the naming of options as well as to reflect that properties_kvpairs is not a part of the response. I'll also add a couple of missing fields.

For creating the connection, options are specified as map[string]string.
This is valid:

"options_kvpairs": {
   "host": "test.com",
   "port": "1234",
   "user": "user123",
   "password": "password123"
}

Contributor:

I think it should still be named options, as that is the name of the nested field after inlining?

Contributor:

Yes, it would be options and not options_kvpairs. Copy-pasted from Andrew's post, but this is the body I used to create the connection.

{
    "name": "openapi_connection_spec_test_2",
    "connection_type": "MYSQL",
    "options": {
        "host": "test.com",
        "port": "5432",
        "user": "dougluce",
        "password": "redacted"
    },
    "properties": {
        "some_prop": "test1",
        "some_prop2": "test2"
    }
}


@andrewli81 (Contributor) left a comment:

LGTM, thanks so much for adding this, one comment on why the integration test is failing.

This is an example of good JSON input:

curl -vvv -H "Authorization: Bearer xxx" -H "Content-Type: application/json" -X POST https://dbc-01eafff1-4ea3.dev.databricks.com/api/2.0/unity-catalog/connections -d '{"name":"test_mysql","connection_type":"MYSQL","options":[{"key":"host","value":"mysqk.fakedb.com"},{"key":"port","value":"1234"},{"key":"user","value":"user123"},{"key":"password","value":"password123"}]}'

@mgyucht (Contributor) left a comment:

First batch of comments. Thanks for contributing this!

"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// This structure contains the fields of catalog.UpdateConnection and catalog.CreateConnection
Contributor:

We need better utilities for converting Go SDK structs to TF schemas so we don't need to inline stuff like this.

var alias ConnectionInfo
common.DataToStructPointer(d, s, &createConnectionRequest)
common.DataToStructPointer(d, s, &alias)
//workaround as cannot set tf:"alias" for the Go SDK struct
Contributor:

micronit: Can you move this comment up one line?

if err != nil {
return err
}
d.SetId(conn.Name)
Contributor:

Are connection names globally unique?

Contributor (Author):

connection names must be unique within a metastore

Contributor:

What happens if a user defines two connections with the same name in two different metastores? Should the name be combined with the metastore ID to make a universally unique identifier?

Contributor:

Yeah if you want global uniqueness we should add metastore id into the scope.
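
A sketch of such a packed ID; the PR later adopts this via common.NewPairID (see the diff further down), and the `|` separator here is an assumption about its encoding:

```go
package common

import (
	"fmt"
	"strings"
)

// packID combines the metastore ID and the connection name into a single,
// globally unique Terraform state ID.
func packID(metastoreID, name string) string {
	return metastoreID + "|" + name
}

// unpackID splits a packed state ID back into its two components.
func unpackID(id string) (metastoreID, name string, err error) {
	parts := strings.SplitN(id, "|", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("invalid ID: %s", id)
	}
	return parts[0], parts[1], nil
}
```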

OptionsKvpairs: map[string]string{
"host": "test.com",
},
PropertiesKvpairs: map[string]string{
Contributor:

Please fix the OpenAPI spec, as we (and customers!) depend heavily on its accuracy to be able to use your team's API.

BTW: do we really want this to be a repeated list of KV pairs? We should be critical about these interfaces and think from the customer's perspective, aiming to do what is natural and intuitive where possible, rather than force all customers to adapt to an unusual pattern because there is some small technical issue. In the worst case, you should be able to define your proto using the Any type and deserialize yourself (scalapb docs here: https://scalapb.github.io/api/com/google/protobuf/any/Any.html).

go.sum Outdated
Comment on lines 54 to 55
github.com/databricks/databricks-sdk-go v0.14.1 h1:s9x18c2i6XbJxem6zKdTrrwEUXQX/Nzn0iVM+qGlRus=
github.com/databricks/databricks-sdk-go v0.14.1/go.mod h1:0iuEtPIoD6oqw7OuFbPskhlEryt2FPH+Ies1UYiiDy8=
Contributor:

Remove this

marekbrysa and others added 12 commits August 2, 2023 10:52
… when `instance_pool_id` is specified (#2507)

NodeTypeID cannot be set in jobsAPI.Update() if InstancePoolID is specified.
If both are specified, assume InstancePoolID takes precedence and NodeTypeID is only computed.

Closes #2502.
Closes #2141.
…job` (#2444)

This allows forcing a full refresh of the pipeline from the job.

This fixes #2362
…2523)

Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.13.0 to 0.14.1.
- [Release notes](https://github.com/databricks/databricks-sdk-go/releases)
- [Changelog](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md)
- [Commits](databricks/databricks-sdk-go@v0.13.0...v0.14.1)

---
updated-dependencies:
- dependency-name: github.com/databricks/databricks-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
* Update docs to include USE_MARKETPLACE_ASSETS privilege

* Add USE_MARKETPLACE_ASSETS to metastore privileges
* Add git job_source to job resource

* lint

* fix test

* Use go sdk type
…a source (#2458)

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source

Right now it's possible to search only by the warehouse ID, which is not always convenient,
although it can be worked around with the `databricks_sql_warehouses` data source plus explicit
filtering. This PR adds the capability to search by either SQL warehouse name or ID.

This fixes #2443

* Update docs/data-sources/sql_warehouse.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Address review comments

also change documentation a bit to better match the data source - it was copied from the
resource as-is.

* More fixes from review

* code review comments

---------

Co-authored-by: Miles Yucht <miles@databricks.com>
#2496)

* Late jobs support (aka health conditions) in `databricks_job` resource

Added support for the `health` block that is used to detect late jobs. Also, this PR includes
the following changes:

* Added `on_duration_warning_threshold_exceeded` attribute to email & webhook notifications (needed for late jobs support)
* Added `notification_settings` on a task level & use jobs & task notification structs from Go SDK
* Reorganized documentation for task block as it's getting more & more attributes

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* address review comments

* add list of tasks

* more review changes

---------

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
@nkvuong requested a review from a team as a code owner August 14, 2023 11:19
@mgyucht (Contributor) left a comment:

Couple questions, otherwise seems mostly alright. Main question is what is the form of the ID of a connection in TF state. If it is not globally unique, we may want to combine it with the metastore's ID to get a globally unique ID. Thoughts?

Response: catalog.ConnectionInfo{
Name: "testConnectionNameNew",
ConnectionType: catalog.ConnectionType("testConnectionType"),
Comment: "testComment",
Contributor:

For consistency, should the Options be included here and in the next response?

Contributor (Author):

done

},
{
Method: http.MethodGet,
Resource: "/api/2.1/unity-catalog/connections/testConnectionName?",
Contributor:

Why is there a trailing `?`?

Contributor (Author):

our SDK generates GET methods with a trailing `?` for parameters
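
A tiny illustration of how that happens when a URL is built by always appending the encoded query string (not the SDK's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	path := "/api/2.1/unity-catalog/connections/testConnectionName"
	q := url.Values{} // no query parameters set
	// Unconditionally appending "?" plus the encoded (empty) query string
	// leaves a bare trailing "?" on the request URL.
	fmt.Println(path + "?" + q.Encode())
}
```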

Contributor:

Also, sometimes I see both with `?` and without `?` - I have a few examples in the exporter

Contributor:

Weird... I think this is OK but probably not intentional...

// suppress diff for sensitive options, which are not returned by the server
func suppressSensitiveOptions(k, old, new string, d *schema.ResourceData) bool {
//this list will expand as other auth may have different sensitive options
sensitiveOptions := []string{"user", "password"}
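
The quoted hunk is truncated; one plausible completion, assuming the diff should be suppressed only for a sensitive key that the server omitted (needs the `strings` and terraform-plugin-sdk `schema` imports):

```go
// suppress diff for sensitive options, which are not returned by the server
func suppressSensitiveOptions(k, old, new string, d *schema.ResourceData) bool {
	// this list will expand as other auth types may have different sensitive options
	sensitiveOptions := []string{"user", "password"}
	for _, option := range sensitiveOptions {
		// k is the flattened map key, e.g. "options.password"; suppress the
		// diff when the server omitted the value but the config supplies one
		if strings.HasSuffix(k, "."+option) && old == "" && new != "" {
			return true
		}
	}
	return false
}
```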
Contributor:

Is it possible to update the user and password fields? Maybe if it were possible to see whether the values for these fields in the local state are different from those in the configuration, but I don't think that is possible in Terraform.

Contributor:

All options are updatable.

Contributor (Author):

it is possible to update those options via the API, but Terraform does not make it easy to compare them for updates.

maybe we need a force_update option that would always send the update payload regardless of diff

Contributor:

Yes, but the question is how to determine whether user/password options are different locally than what is configured (imagine someone changed user/password manually in Databricks or via a separate process). How can we compute whether or not the resource needs to be updated? Typically, TF reads resources to compare them to its state & the configuration supplied by the user to compute a diff. If user and password are not returned, it may not be possible to know whether the connection needs to be updated.

@andrewli81 (Contributor) commented Aug 15, 2023:

Is it possible to default to always updating the username and password? There are basically 2 worlds:

  1. Silently fail to update when the user changes the username/password.
  2. Always update the username/password.

I feel option 1 might be confusing to people.

Contributor:

Given that we have the choice between forcing users to make manual changes if secret fields are updated vs updating them every time, I agree that we should lean towards always refreshing. In light of moving forward with this resource and unblocking customers, let's do this. @nkvuong what do you think, would this be a viable option from the Terraform perspective? IIUC to do this, we treat the secret options in the same way as non-secret options, and they will be included in the diff every time.

@andrewli81 I really recommend that your team revisits the secret management story for UC connections. We don't need each team to reinvent the wheel when it comes to secret management, and when this happens, it increases complexity for consumers and results in worse user experiences. For inspiration, you can look at how secrets are referenced in Spark configurations. I understand that there is a workspace-level/account-level divide that will need to be addressed. There may be other use-cases for account-level secrets in the future as well (or even today: consider account-level service principal passwords).

@nkvuong (Author) commented Aug 16, 2023:

we treat the secret options in the same way as non-secret options, and they will be included in the diff every time.

@alexott gave a good suggestion on how to do it (this was done for webhooks).
I've made the change, which now works without the need for DiffSuppressFunc.

Contributor:

we have a similar problem with MLflow webhooks, and on read we simply restore data from the state: https://github.com/databricks/terraform-provider-databricks/blob/master/mlflow/resource_mlflow_webhook.go#L79 - we can try to do the same here
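
A minimal sketch of that restore-on-read pattern applied to connections, assuming the sensitive keys are known up front (a hypothetical helper, not the PR's final code):

```go
// restoreSensitiveOptions copies sensitive option values from the prior
// Terraform state into a freshly read API response before it is persisted,
// since the server never returns them.
func restoreSensitiveOptions(d *schema.ResourceData, options map[string]string) {
	stateValue, ok := d.GetOk("options")
	if !ok {
		return
	}
	stateOptions, ok := stateValue.(map[string]any)
	if !ok {
		return
	}
	for _, key := range []string{"user", "password"} {
		if _, returned := options[key]; !returned {
			if v, ok := stateOptions[key]; ok {
				options[key] = v.(string)
			}
		}
	}
}
```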

Contributor:

I understand that there is a workspace-level/account-level divide that will need to be addressed. There may be other use-cases for account-level secrets in the future as well (or even today: consider account-level service principal passwords).

We have to be unblocked on shipping features that need to store secrets in UC account-level objects, in lieu of a team that is willing to fund an account-level secret manager. I will probably write a vision doc for this because I think we will need it for other use cases as well.

Contributor:

Completely with you. That is why I said "we should lean towards always refreshing. In light of moving forward with this resource and unblocking customers, let's do this."

if err != nil {
return err
}
d.SetId(conn.Name)
Contributor:

What happens if a user defines two connections with the same name in two different metastores? Should the name be combined with the metastore ID to make a universally unique identifier?

@codecov-commenter commented Aug 15, 2023

Codecov Report

Merging #2528 (74acee9) into master (45c4d80) will decrease coverage by 0.27%.
Report is 9 commits behind head on master.
The diff coverage is 73.33%.


@@            Coverage Diff             @@
##           master    #2528      +/-   ##
==========================================
- Coverage   88.09%   87.82%   -0.27%     
==========================================
  Files         145      146       +1     
  Lines       12108    12234     +126     
==========================================
+ Hits        10667    10745      +78     
- Misses        944      968      +24     
- Partials      497      521      +24     
Files Changed                     Coverage Δ
catalog/resource_catalog.go       71.76% <ø> (ø)
catalog/resource_connection.go    72.88% <72.88%> (ø)
provider/provider.go              94.11% <100.00%> (+0.03%) ⬆️

... and 9 files with indirect coverage changes

@nkvuong (Author) commented Aug 15, 2023:

What happens if a user defines two connections with the same name in two different metastores? Should the name be combined with the metastore ID to make a universally unique identifier?

@mgyucht this is actually a good point, and we may need to retroactively do this for all UC resources 😨

ProviderName string `json:"provider_name,omitempty" tf:"force_new,conflicts:storage_root"`
ShareName string `json:"share_name,omitempty" tf:"force_new,conflicts:storage_root"`
ConnectionName string `json:"connection_name,omitempty" tf:"force_new,conflicts:storage_root"`
Properties map[string]string `json:"properties,omitempty"`
Contributor:

CatalogInfo also has options - our OpenAPI spec needs updating.


Contributor:

@nkvuong Can you address this one as well? Thanks!

Contributor (Author):

@andrewli81 how would the options be split between connections & catalogs? And would there be sensitive options to be handled here as well?
Once this is fixed in the OpenAPI spec, we just need to regenerate the SDK, so it should not take long.

Contributor:

@nkvuong

@andrewli81 how would the options be split between connections & catalogs? and would there be sensitive options to be handled here as well?

The options of catalogs are mainly used to identify databases. This is because database systems like PostgreSQL use 3-layer namespaces (same as UC), so customers need to specify which database to import into a UC catalog.

https://docs.databricks.com/en/query-federation/postgresql.html#language-SQL:~:text=Run%20the%20following%20SQL%20command%20in%20a%20notebook%20or%20Databricks%20SQL%20editor.%20Items%20in%20brackets%20are%20optional.%20Replace%20the%20placeholder%20values%3A

The OpenAPI spec has been fixed: https://docs.databricks.com/api/workspace/connections/create

Can we also fix it here?
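
For context, once the spec and SDK are regenerated, creating a foreign catalog through the Go SDK could look roughly like this (names and the exact field set are assumptions until that update lands):

```go
// createForeignCatalog imports a single remote database via an existing
// connection; the options map identifies which database, since systems like
// PostgreSQL use 3-layer namespaces just as UC does.
func createForeignCatalog(ctx context.Context, w *databricks.WorkspaceClient) (*catalog.CatalogInfo, error) {
	return w.Catalogs.Create(ctx, catalog.CreateCatalog{
		Name:           "my_foreign_catalog",  // hypothetical name
		ConnectionName: "my_mysql_connection", // hypothetical connection
		Options:        map[string]string{"database": "my_remote_db"},
	})
}
```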

// suppress diff for sensitive options, which are not returned by the server
func suppressSensitiveOptions(k, old, new string, d *schema.ResourceData) bool {
//this list will expand as other auth may have different sensitive options
sensitiveOptions := []string{"user", "password"}
@andrewli81 (Contributor) commented Aug 15, 2023:

The source of truth for the list of secret options can be retrieved by the type manifest API:
https://temp.corp.databricks.com/?114480149a090e3e#QR0tqKyrycfLQ0OH3ZEqsyZOuWEwEtNMb+o7S4bdpXs=

Can we also add personalAccessToken, access_token, client_secret, and OAuthPvtKey?

Contributor (Author):

done

@andrewli81 (Contributor) left a comment:

Three comments before merging:

  1. The list of secret options will need to expand a bit - you can always get the list of sensitive options from the type-manifest API; I posted an example: https://temp.corp.databricks.com/?114480149a090e3e#QR0tqKyrycfLQ0OH3ZEqsyZOuWEwEtNMb+o7S4bdpXs=
     I will also point future folks who add new connectors to update this list in TF.

  2. Can we make the "secret" options always update if specified by the user? Silently failing to update might be surprising from the user's perspective.

  3. CreateCatalog will need options too, in addition to the connection name, for foreign catalogs to work; our API spec is a bit outdated, we will update that as well.

@nkvuong (Author) commented Aug 18, 2023:

@mgyucht all comments should now be addressed; we just need to wait for the OpenAPI spec for Catalog to be updated, i.e. adding the options parameter to databricks_catalog

@nkvuong changed the title from "Add support for Unity Catalog connections & foreign catalog" to "Add databricks_connection resource to support Lakehouse Federation" on Aug 21, 2023
@mgyucht (Contributor) left a comment:

One small question, but the implementation looks good to me.

func(m map[string]*schema.Schema) map[string]*schema.Schema {
return m
})
pi := common.NewPairID("name", "metastore_id").Schema(
Contributor:

Just to confirm: do we have any consistency around which field comes first in state? I see some places where metastore_id is first and some places where metastore_id is second. Maybe we should try to be somewhat consistent for these packed IDs.

@mgyucht enabled auto-merge August 22, 2023 14:14
@mgyucht added this pull request to the merge queue Aug 22, 2023
Merged via the queue into master with commit 2a7182a Aug 22, 2023
4 checks passed
@mgyucht deleted the feature/uc-connection branch August 22, 2023 14:33
@tanmay-db mentioned this pull request Aug 25, 2023
nkvuong added a commit that referenced this pull request Sep 7, 2023
…2528)

* first draft

* add foreign catalog

* update doc

* Fixed `databricks_job` resource to clear instance-specific attributes when `instance_pool_id` is specified (#2507)

NodeTypeID cannot be set in jobsAPI.Update() if InstancePoolID is specified.
If both are specified, assume InstancePoolID takes precedence and NodeTypeID is only computed.

Closes #2502.
Closes #2141.

* Added `full_refresh` attribute to the `pipeline_task` in `databricks_job` (#2444)

This allows forcing a full refresh of the pipeline from the job.

This fixes #2362

* Configured merge queue for the provider (#2533)

* misc doc updates (#2516)

* Bump github.com/databricks/databricks-sdk-go from 0.13.0 to 0.14.1 (#2523)

Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.13.0 to 0.14.1.
- [Release notes](https://github.com/databricks/databricks-sdk-go/releases)
- [Changelog](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md)
- [Commits](databricks/databricks-sdk-go@v0.13.0...v0.14.1)

---
updated-dependencies:
- dependency-name: github.com/databricks/databricks-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* Fix IP ACL read (#2515)

* Add support for `USE_MARKETPLACE_ASSETS` privilege to metastore (#2505)

* Update docs to include USE_MARKETPLACE_ASSETS privilege

* Add USE_MARKETPLACE_ASSETS to metastore privileges

* Add git job_source to job resource (#2538)

* Add git job_source to job resource

* lint

* fix test

* Use go sdk type

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source (#2458)

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source

Right now it's possible to search only by the warehouse ID, which is not always convenient,
although it can be worked around with the `databricks_sql_warehouses` data source plus explicit
filtering. This PR adds the capability to search by either SQL warehouse name or ID.

This fixes #2443

* Update docs/data-sources/sql_warehouse.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Address review comments

also change documentation a bit to better match the data source - it was copied from the
resource as-is.

* More fixes from review

* code review comments

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Late jobs support (aka health conditions) in `databricks_job` resource (#2496)

* Late jobs support (aka health conditions) in `databricks_job` resource

Added support for the `health` block that is used to detect late jobs. Also, this PR includes
the following changes:

* Added `on_duration_warning_threshold_exceeded` attribute to email & webhook notifications (needed for late jobs support)
* Added `notification_settings` on a task level & use jobs & task notification structs from Go SDK
* Reorganized documentation for task block as it's getting more & more attributes

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* address review comments

* add list of tasks

* more review changes

---------

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* feedback

* update struct

* add suppress diff

* fix suppress diff

* fix acceptance tests

* test feedback

* make id a pair

* better sensitive options handling

* reorder id pair

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: marekbrysa <53767523+marekbrysa@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bvdboom <bvdboom@users.noreply.github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
nkvuong added a commit that referenced this pull request Oct 3, 2023
github-merge-queue bot pushed a commit that referenced this pull request Dec 18, 2023
* refactor to support account-level ip acl

* add doc

* add acceptance tests

* Refactor `databricks_schema` to Go SDK (#2572)

* refactor `databricks_schema` to Go SDK

* clean up

* Refactor `databricks_external_location` to Go SDK (#2546)

* refactor external location to Go SDK

* keep force_new

* switch struct to sdk

* Refresh `databricks_grants` with latest permissible grants (#2567)

* refresh latest grants

* fix test

* fix typo

* add test

* Fixed databricks_access_control_rule_set integration test in Azure (#2591)

* Update Go SDK to v0.17.0 (#2599)

* Update Go SDK to 0.17.0

* go mod tidy

* RunJobTask job_id type fix (#2588)

* fix run job id bug

* update tests

* update exporter tests

* Add `databricks_connection` resource to support Lakehouse Federation (#2528)


* Documentation changes (#2576)

* add troubleshooting guide for grants/permissions config drifts

* update authentication in index

* feedback

* Exporter: Incremental export of notebooks, SQL objects and some other resources (#2563)

* First pass on incremental export of notebooks/files & SQL objects

* use optional modified_at field

* Add `-incremental` & `-update-since` command-line options

* Incremental generation of `import.sh` and `vars.tf`

* Incremental export of the TF resources themselves

* Added incremental support for the rest of the objects

also updated documentation & compatibility matrix

* Add tests for incremental generation

* Add clarification about periodic full export

* Store last run timestamp in the file on disk and use with `-incremental`

* Address initial review comments

* Incremental export of MLflow Webhooks, expanded tests to cover merge of variables

* fix test

* Fix reflection method marshallJSON for CMK in mws workspace (#2605)

* Fix reflection method marshallJSON for CMK in mws workspace

* Add UT

* Add missing documentation for CMK support on GCP (#2604)

* add CMK on GCP to docs

* feedback

* Add `owner` parameter to `databricks_share` resource (#2594)

* add `owner` parameter to `databricks_share`

* suppress diff

* Exporter: command-line option to control output format for notebooks (#2569)

New command-line option `-notebooksFormat` allows exporting notebooks in DBC and IPython formats.

This fixes #2568

* Fix creation of views with comments using `databricks_sql_table` (#2589)

* mark column type as omitempty

* add acc test

* escape names for sql

* add test & suppress_diff

* fix tests

* fix acc test

* update doc

* fix acc tests

* fix doc

* Add account-level API support for Unity Catalog objects (#2182)

* first draft

* account client check

* account client check

* Fixed `databricks_service_principals` data source issue with empty filter (#2185)

* fix `databricks_service_principals` data source issue with empty filter

* fix acc tests

* Allow rotating `token` block in `databricks_mws_workspaces` resource by only changing `comment` field (#2114)

Tested manually for the following cases

Without this PR the provider recreates the entire workspace on a token update
With changes in this PR only the token is refreshed
When both token and storage_configuration_id are changed then the entire workspace is recreated
Additional unit tests also added that allow checks that patch workspace calls are not made when only token is changed

Also added an integration test to check tokens are successfully updated

* Excludes roles in scim API list calls to reduce load on databricks scim service (#2181)

* Exclude roles in scim API list calls

* more test fixes

* Update SDK to v0.6.0 (#2186)

* Update SDK to v0.6.0

* go mod tidy

* update sdk to 0.7.0

* add integration tests

* fix acceptance tests

* fix tests

* add account-level API support for `metastore_data_access`

* add account API support for `databricks_storage_credential`

* address feedback

* refactor to `WorkspaceOrAccountRequest`

* fix acceptance tests

* Release v1.21.0 (#2471)

Release v1.21.0 of the Terraform Provider for Databricks.

 * Added condition_task to the [`databricks_job`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource (private preview) ([#2459](#2459)).
 * Added `AccountData`, `AccountClient` and defined generic databricks data utilities for defining workspace and account-level data sources ([#2429](#2429)).
 * Added documentation link to existing Databricks Terraform modules ([#2439](#2439)).
 * Added experimental compute field to [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource ([#2401](#2401)).
 * Added import example to doc for [databricks_group_member](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/group_member) resource ([#2453](#2453)).
 * Added support for subscriptions in dashboards & alert SQL tasks in [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) ([#2447](#2447)).
 * Fixed model serving integration test ([#2460](#2460), [#2461](#2461)).
 * Fixed [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource file arrival trigger parameter name ([#2438](#2438)).
 * Fixed catalog_workspace_binding_test ([#2463](#2463), [#2451](#2451)).

No breaking changes in this release.

* Install mlflow cluster using  in model serving test if the cluster is already running (#2470)

* Bump golang.org/x/mod from 0.11.0 to 0.12.0 (#2462)

Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.11.0 to 0.12.0.
- [Commits](golang/mod@v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/mod
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Exporter: make resource names more unique to avoid duplicate resources errors (#2452)

This includes following changes:

* Add user ID to the `databricks_user` resource name to avoid clashes on names like, `user+1` and `user_1`
* Add user/sp/group ID to the name of the `databricks_group_member` resource
* Remove too aggressive name normalization pattern that also leads to the generation of
  the duplicate resource names for different resources

* Add documentation notes about legacy cluster type & data access (#2437)

* Add documentation notes about legacy cluster type & data access

* Update docs/resources/cluster.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Update docs/resources/mount.md

Co-authored-by: Miles Yucht <miles@databricks.com>

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Use random catalog name in SQL table integration tests (#2473)

The fixed value prevented concurrent integration test runs.

* Link model serving docs to top level README (#2474)

* Add one more item to the troubleshooting guide (#2477)

It relates to using OAuth for authentication without providing `account_id` in the
provider configuration.

* Added `databricks_access_control_rule_set` resource for managing account-level access (#2371)

* Added `acl_principal_id` attribute to `databricks_user`, `databricks_group` & `databricks_service_principal` for easier use with `databricks_access_control_rule_set` (#2485)

It should simplify specification of principals in the `databricks_access_control_rule_set`
so instead of this (string with placeholders):

```
   grant_rules {
     principals = ["groups/${databricks_group.ds.display_name}"]
     role       = "roles/servicePrincipal.user"
   }
```

it will be simpler to refer like this:

```
   grant_rules {
     principals = [databricks_group.ds.acl_principal_id]
     role       = "roles/servicePrincipal.user"
   }
```

* Added support for Unity Catalog `databricks_metastores` data source  (#2017)

* add documentation for databricks_metastores data source

* add API endpoint for listing metastores

* add metastores data resource

* add test for metastores data source

* add metastores datasource to resource mapping

* fix reference to wrong resource docs

* add a Metastores struct for the response of the API, use this in the data source

* update terraform specific object attributes

* add new data test

* remove slice_set property from MetastoreData

* use databricks-go-sdk for data_metastore.go

* removed listMetastores endpoint since it's unused

* make sure tests also use the unitycatalog.MetastoreInfo from the sdk

* remove redundant resource

* test -dev

* fix

* fmt

* cleanup

* Added AccountClient to DatabricksClient and AccountData

* upd

* cleanup

* accountLevel

* upd

* add example

* list

* cleanup

* docs

* remove dead code

* wip

* use maps

* upd

* cleanup

* comments

* -

* remove redundant test

---------

Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: vuong-nguyen <44292934+nkvuong@users.noreply.github.com>

* Added support for Unity Catalog `databricks_metastore` data source (#2492)

Enable fetching account level metastore information through id for a single metastore.

* Supported new Delve binary name format (#2497)

https://github.com/go-delve/delve/blob/master/CHANGELOG.md#1210-2023-06-23 changes the naming of the delve debug binary. This PR changes isInDebug to accommodate old and new versions of Delve.

* Add code owners for Terraform (#2498)

* Removed unused dlvLoadConfig configuration from settings.json (#2499)

* Fix provider after updating SDK to 0.13 (#2494)

* Fix provider after updating SDK to 0.13

* add unit test

* split test

* Added `control_run_state` flag to the `databricks_job` resource for continuous jobs (#2466)

This PR introduces a new flag, control_run_state, to replace the always_running flag. This flag only applies to continuous jobs. Its behavior is described below:

For jobs with pause_status = PAUSED, it is a no-op on create and stops the active job run on update (if applicable).
For jobs with pause_status = UNPAUSED, it starts a job run on create and stops the active job run on update (if applicable).
The job does not need to be started, as that is handled by the Jobs service itself.

This fixes #2130.

* Added exporter for `databricks_workspace_file` resource (#2493)

* Preliminary changes to make workspace files implementation

- make `NotebooksAPI.List` return directories as well when called in the recursive
  mode (same as non-recursive behavior)
- Because of that, remove the separate `ListDirectories`
- Extend `workspace.ObjectStatus` with additional fields (will be required for
  incremental notebooks export)
- Cache listing of all workspace objects, and then use it for all operations - list
  notebooks, list directories, list workspace files

* Added exporting of workspace files

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Supported boolean values in `databricks_sql_alert` alerts (#2506)

* Added more common issues for troubleshooting (#2486)

* add troubleshooting

* fix doc category

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Fixed handling of comments in `databricks_sql_table` resource (#2472)

* column comments and single quote escape

* Delimiter collision avoidance table comment

* compatible with user single quote escape

* unit tests for parseComment

* corrected fmt

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Added clarification that `databricks_schema` and `databricks_sql_table` should be imported by their full name, not just by name (#2491)

Co-authored-by: Miles Yucht <miles@databricks.com>

* Updated `databricks_user` with `force = true` to check for error message prefix (#2510)

This fixes #2500

* fix force delete

* remove orphaned code

* fix acceptance tests

* upgrade go sdk

* fix metastoreinfo struct

* docs update

* fix acceptance tests

* fix tests

* updated docs

* fix tests

* rename test

* update tests

* fix tests

* fix test

* add state upgrader

* fix struct

* fix tests

* feedback

* feedback

* fix acc test

* fix test

* fix test

* fix test

* feedback

* fix acc tests

* feedback

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Serge Smertin <259697+nfx@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Gautham Sunjay <gauthamsunjay17@gmail.com>
Co-authored-by: guillesd <74136033+guillesd@users.noreply.github.com>
Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com>
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
Co-authored-by: klus <lus.karol@gmail.com>

* Fix UC acceptance test (#2613)

* fix acc test

* remove deprecated field from sdk

* Release v1.24.0 (#2614)

* Release v1.24.0

* Fixed verification of workspace reachability by using scim/me which is always available  (#2618)

* add flag to skip verification

* cleanup

* cleanup

* -

* test


* Release v1.24.1 (#2625)

* Release v1.24.1

* go upd

* new line

* Add doc strings for ResourceFixtures (#2633)

* Add doc strings for ResourceFixtures

* fmt

* Update qa/testing.go

Co-authored-by: Miles Yucht <miles@databricks.com>

* Update qa/testing.go

Co-authored-by: Miles Yucht <miles@databricks.com>

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Bump github.com/hashicorp/hcl/v2 from 2.17.0 to 2.18.0 (#2636)

Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.17.0 to 2.18.0.
- [Release notes](https://github.com/hashicorp/hcl/releases)
- [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md)
- [Commits](hashicorp/hcl@v2.17.0...v2.18.0)

---
updated-dependencies:
- dependency-name: github.com/hashicorp/hcl/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* terrafmt; updated share and share recipient docs (#2641)

* update documentation (#2644)

* fix response

* remove preview path

* rename test

* fix create call

* add wait for acceptance tests

* fix test

* feedback

* use golang struct

* add wait time for acc test

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: Krishna Swaroop K <krishna.swaroop@databricks.com>
Co-authored-by: marekbrysa <53767523+marekbrysa@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bvdboom <bvdboom@users.noreply.github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Rohan Kabra <rohan.kabra@databricks.com>
Co-authored-by: Serge Smertin <259697+nfx@users.noreply.github.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Gautham Sunjay <gauthamsunjay17@gmail.com>
Co-authored-by: guillesd <74136033+guillesd@users.noreply.github.com>
Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com>
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
Co-authored-by: klus <lus.karol@gmail.com>
Co-authored-by: Oleh Mykolaishyn <owlleg6@gmail.com>

Successfully merging this pull request may close these issues.

[FEATURE] Add Lakehouse Federation Resources
9 participants