Add `databricks_connection` resource to support Lakehouse Federation (#2528)

* first draft

* add foreign catalog

* update doc

* Fixed `databricks_job` resource to clear instance-specific attributes when `instance_pool_id` is specified (#2507)

NodeTypeID cannot be set in jobsAPI.Update() if InstancePoolID is specified.
If both are specified, assume InstancePoolID takes precedence and NodeTypeID is only computed.

Closes #2502.
Closes #2141.
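
The precedence rule described above can be sketched standalone. This is an illustrative Go snippet, not the provider's actual code; the `jobCluster` type and `normalize` function are hypothetical names for the idea that `node_type_id` must be cleared before calling `jobsAPI.Update()` when an instance pool is set.

```go
package main

import "fmt"

// jobCluster mirrors the two relevant cluster attributes (illustrative only).
type jobCluster struct {
	InstancePoolID string
	NodeTypeID     string
}

// normalize clears NodeTypeID when InstancePoolID is set, since the Jobs API
// rejects updates that specify both: the pool wins and the node type becomes
// a computed attribute.
func normalize(c jobCluster) jobCluster {
	if c.InstancePoolID != "" {
		c.NodeTypeID = ""
	}
	return c
}

func main() {
	c := normalize(jobCluster{InstancePoolID: "pool-123", NodeTypeID: "i3.xlarge"})
	fmt.Printf("node type after normalize: %q\n", c.NodeTypeID)
}
```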

* Added `full_refresh` attribute to the `pipeline_task` in `databricks_job` (#2444)

This allows forcing a full refresh of the pipeline from the job.

This fixes #2362

* Configured merge queue for the provider (#2533)

* misc doc updates (#2516)

* Bump github.com/databricks/databricks-sdk-go from 0.13.0 to 0.14.1 (#2523)

Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.13.0 to 0.14.1.
- [Release notes](https://github.com/databricks/databricks-sdk-go/releases)
- [Changelog](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md)
- [Commits](databricks/databricks-sdk-go@v0.13.0...v0.14.1)

---
updated-dependencies:
- dependency-name: github.com/databricks/databricks-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* Fix IP ACL read (#2515)

* Add support for `USE_MARKETPLACE_ASSETS` privilege to metastore (#2505)

* Update docs to include USE_MARKETPLACE_ASSETS privilege

* Add USE_MARKETPLACE_ASSETS to metastore privileges

* Add git job_source to job resource (#2538)

* Add git job_source to job resource

* lint

* fix test

* Use go sdk type

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source (#2458)

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source

Right now it's only possible to look up a warehouse by its ID, which isn't always convenient,
although it can be worked around by combining the `databricks_sql_warehouses` data source with
explicit filtering.  This PR adds the ability to search by either SQL warehouse name or ID.

This fixes #2443
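
The lookup logic this PR describes can be sketched as follows. This is a minimal illustration, not the data source's actual implementation; `warehouse` and `lookupWarehouse` are hypothetical names, and the rule assumed is: prefer ID when given, otherwise require exactly one name match.

```go
package main

import "fmt"

// warehouse holds the two fields the lookup needs (illustrative only).
type warehouse struct {
	ID   string
	Name string
}

// lookupWarehouse resolves a SQL warehouse by ID when id is non-empty,
// otherwise by name, erroring unless exactly one warehouse matches the name.
func lookupWarehouse(all []warehouse, id, name string) (warehouse, error) {
	if id != "" {
		for _, w := range all {
			if w.ID == id {
				return w, nil
			}
		}
		return warehouse{}, fmt.Errorf("no warehouse with id %q", id)
	}
	var matches []warehouse
	for _, w := range all {
		if w.Name == name {
			matches = append(matches, w)
		}
	}
	if len(matches) != 1 {
		return warehouse{}, fmt.Errorf("expected exactly one warehouse named %q, found %d", name, len(matches))
	}
	return matches[0], nil
}

func main() {
	all := []warehouse{{ID: "a1", Name: "etl"}, {ID: "b2", Name: "bi"}}
	w, err := lookupWarehouse(all, "", "bi")
	fmt.Println(w.ID, err)
}
```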

* Update docs/data-sources/sql_warehouse.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Address review comments

Also changed the documentation a bit to better match the data source; it had been copied
as-is from the resource.

* More fixes from review

* code review comments

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Late jobs support (aka health conditions) in `databricks_job` resource (#2496)

* Late jobs support (aka health conditions) in `databricks_job` resource

Added support for the `health` block that is used to detect late jobs.  This PR also includes
the following changes:

* Added `on_duration_warning_threshold_exceeded` attribute to email & webhook notifications (needed for late jobs support)
* Added `notification_settings` on a task level & use jobs & task notification structs from Go SDK
* Reorganized documentation for task block as it's getting more & more attributes
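
The "late jobs" idea behind the `health` block can be illustrated standalone. The types below are hypothetical sketches, not the Go SDK's structs; the metric and operator strings mirror the Jobs API's health-rule values but are assumptions here.

```go
package main

import "fmt"

// healthRule flags a run whose metric crosses a threshold (illustrative shape).
type healthRule struct {
	Metric string // e.g. "RUN_DURATION_SECONDS"
	Op     string // e.g. "GREATER_THAN"
	Value  int64  // threshold
}

// isLate reports whether a run's duration violates a duration-based rule.
func isLate(rule healthRule, runDurationSeconds int64) bool {
	if rule.Metric != "RUN_DURATION_SECONDS" || rule.Op != "GREATER_THAN" {
		return false
	}
	return runDurationSeconds > rule.Value
}

func main() {
	r := healthRule{Metric: "RUN_DURATION_SECONDS", Op: "GREATER_THAN", Value: 600}
	fmt.Println(isLate(r, 900))
}
```

A violated rule is what triggers the new `on_duration_warning_threshold_exceeded` notifications mentioned above.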

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* address review comments

* add list of tasks

* more review changes

---------

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* feedback

* update struct

* add suppress diff

* fix suppress diff

* fix acceptance tests

* test feedback

* make id a pair

* better sensitive options handling

* reorder id pair

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: marekbrysa <53767523+marekbrysa@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bvdboom <bvdboom@users.noreply.github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
8 people committed Sep 7, 2023
1 parent 3edd45c commit b2ca75d
Showing 7 changed files with 531 additions and 9 deletions.
19 changes: 10 additions & 9 deletions catalog/resource_catalog.go
@@ -27,15 +27,16 @@ func ucDirectoryPathSlashAndEmptySuppressDiff(k, old, new string, d *schema.Reso
}

type CatalogInfo struct {
	Name           string            `json:"name"`
	Comment        string            `json:"comment,omitempty"`
	StorageRoot    string            `json:"storage_root,omitempty" tf:"force_new"`
	ProviderName   string            `json:"provider_name,omitempty" tf:"force_new,conflicts:storage_root"`
	ShareName      string            `json:"share_name,omitempty" tf:"force_new,conflicts:storage_root"`
	ConnectionName string            `json:"connection_name,omitempty" tf:"force_new,conflicts:storage_root"`
	Properties     map[string]string `json:"properties,omitempty"`
	Owner          string            `json:"owner,omitempty" tf:"computed"`
	IsolationMode  string            `json:"isolation_mode,omitempty" tf:"computed"`
	MetastoreID    string            `json:"metastore_id,omitempty" tf:"computed"`
}

func ResourceCatalog() *schema.Resource {
121 changes: 121 additions & 0 deletions catalog/resource_connection.go
@@ -0,0 +1,121 @@
package catalog

import (
"context"

"github.com/databricks/databricks-sdk-go/service/catalog"
"github.com/databricks/terraform-provider-databricks/common"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"golang.org/x/exp/slices"
)

// This struct combines the fields of catalog.UpdateConnection and catalog.CreateConnection.
// We need it because Owner, FullNameArg, SchemaName and CatalogName aren't all present in either one of them.
// We also need to annotate the Owner field with tf:"computed".
type ConnectionInfo struct {
// User-provided free-form text description.
Comment string `json:"comment,omitempty" tf:"force_new"`
// The type of connection.
ConnectionType string `json:"connection_type" tf:"force_new"`
// Unique identifier of parent metastore.
MetastoreId string `json:"metastore_id,omitempty" tf:"computed"`
// Name of the connection.
Name string `json:"name"`
// Name of the connection.
NameArg string `json:"-" url:"-"`
// A map of key-value properties attached to the securable.
Options map[string]string `json:"options" tf:"sensitive"`
// Username of current owner of the connection.
Owner string `json:"owner,omitempty" tf:"force_new,suppress_diff"`
// An object containing map of key-value properties attached to the
// connection.
Properties map[string]string `json:"properties,omitempty" tf:"force_new"`
// If the connection is read only.
ReadOnly bool `json:"read_only,omitempty" tf:"force_new,computed"`
}

var sensitiveOptions = []string{"user", "password", "personalAccessToken", "access_token", "client_secret", "OAuthPvtKey"}

func ResourceConnection() *schema.Resource {
s := common.StructToSchema(ConnectionInfo{},
func(m map[string]*schema.Schema) map[string]*schema.Schema {
return m
})
pi := common.NewPairID("metastore_id", "name").Schema(
func(m map[string]*schema.Schema) map[string]*schema.Schema {
return s
})
return common.Resource{
Schema: s,
Create: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
w, err := c.WorkspaceClient()
if err != nil {
return err
}
var createConnectionRequest catalog.CreateConnection
common.DataToStructPointer(d, s, &createConnectionRequest)
conn, err := w.Connections.Create(ctx, createConnectionRequest)
if err != nil {
return err
}
d.Set("metastore_id", conn.MetastoreId)
pi.Pack(d)
return nil
},
Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
w, err := c.WorkspaceClient()
if err != nil {
return err
}
_, connName, err := pi.Unpack(d)
if err != nil {
return err
}
conn, err := w.Connections.GetByNameArg(ctx, connName)
if err != nil {
return err
}
// We need to preserve original sensitive options as API doesn't return them
var cOrig catalog.CreateConnection
common.DataToStructPointer(d, s, &cOrig)
for key, element := range cOrig.Options {
if slices.Contains(sensitiveOptions, key) {
conn.Options[key] = element
}
}
return common.StructToData(conn, s, d)
},
Update: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
w, err := c.WorkspaceClient()
if err != nil {
return err
}
_, connName, err := pi.Unpack(d)
if err != nil {
return err
}
var updateConnectionRequest catalog.UpdateConnection
common.DataToStructPointer(d, s, &updateConnectionRequest)
updateConnectionRequest.NameArg = connName
conn, err := w.Connections.Update(ctx, updateConnectionRequest)
if err != nil {
return err
}
// We need to repack the Id as the name may have changed
d.Set("name", conn.Name)
pi.Pack(d)
return nil
},
Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
w, err := c.WorkspaceClient()
if err != nil {
return err
}
_, connName, err := pi.Unpack(d)
if err != nil {
return err
}
return w.Connections.DeleteByNameArg(ctx, connName)
},
}.ToResource()
}
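
The sensitive-option preservation in `Read` above can be shown in isolation. This is a standalone sketch, not the provider's code: `preserveSensitive` is a hypothetical helper, the option names come from `sensitiveOptions` in the diff, and the stdlib `slices` package (Go 1.21+) stands in for `golang.org/x/exp/slices`.

```go
package main

import (
	"fmt"
	"slices"
)

// sensitiveOptions lists option keys the API never echoes back.
var sensitiveOptions = []string{"user", "password", "personalAccessToken", "access_token", "client_secret", "OAuthPvtKey"}

// preserveSensitive overlays locally stored secret options onto the API
// response, since the server omits them from reads.
func preserveSensitive(local, remote map[string]string) map[string]string {
	for k, v := range local {
		if slices.Contains(sensitiveOptions, k) {
			remote[k] = v
		}
	}
	return remote
}

func main() {
	local := map[string]string{"host": "db.example.com", "password": "s3cret"}
	remote := map[string]string{"host": "db.example.com"} // API response omits the secret
	merged := preserveSensitive(local, remote)
	fmt.Println(merged["password"])
}
```

Without this merge, every refresh would show a spurious diff on the secret fields.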