More flexible cluster configuration #467
Conversation
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the dbt-spark contributing guide.

@jtcohen6 @lostmygithubaccount is the current overwriting logic odd?
Force-pushed from 904bbd1 to 37c5377
```python
@property
def cluster_id(self) -> str:
    return self.parsed_model.get("cluster_id", self.credentials.cluster_id)
```
@ChenyuLInx I'm updating and testing dbt-databricks, and found out that this is a mistake. I think this should be:

```python
self.parsed_model["config"].get("cluster_id", self.credentials.cluster_id)
```
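A minimal sketch of why the nesting matters, assuming (as the comment suggests) that per-model settings live under a `"config"` key in the parsed model rather than at the top level; the dict contents here are illustrative, not taken from dbt internals:

```python
# Assumed shape: user-supplied model settings are nested under "config",
# so a top-level .get("cluster_id", ...) never sees the per-model override.
parsed_model = {
    "name": "my_model",
    "config": {"cluster_id": "model-specific-cluster"},
}

class Credentials:
    # Stand-in for the connection credentials, which carry a default cluster
    cluster_id = "default-cluster"

credentials = Credentials()

# Buggy lookup: checks the top level, silently falls back to the default
buggy = parsed_model.get("cluster_id", credentials.cluster_id)

# Fixed lookup: checks the model's config block first
fixed = parsed_model["config"].get("cluster_id", credentials.cluster_id)

print(buggy)  # default-cluster
print(fixed)  # model-specific-cluster
```

With the buggy version the per-model `cluster_id` is never picked up, so every model silently runs on the credentials' default cluster.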
### Description

Follows "More flexible cluster configuration" at dbt-labs/dbt-spark#467.
- Reuse `dbt-spark`'s implementation
- Remove the dependency on `databricks-cli`
- Internal refactorings

Co-authored-by: allisonwang-db <allison.wang@databricks.com>
resolves #444
Description

When using notebook submission, if `job_cluster_config` is specified, we will run that model with a job cluster. This PR also lets users specify a separate `cluster_id` or `job_cluster_config` for each individual model through config. `job_cluster_config` currently overrides `cluster_id` (if `job_cluster_config` is set for a model, we will always use it). This PR also removes the need for `user` and puts dbt model files under `/dbt_python_mode/{$SCHEMA}` in the Databricks workspace.

Checklist
- Run `changie new` to create a changelog entry
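The precedence described in the PR description (a per-model `job_cluster_config` always wins; otherwise a per-model `cluster_id`, falling back to the profile default) can be sketched as follows. The function and return shape are illustrative, not the actual dbt-spark implementation:

```python
from typing import Any, Optional, Tuple

def resolve_compute(model_config: dict, default_cluster_id: Optional[str]) -> Tuple[str, Any]:
    # Hypothetical helper: if the model sets job_cluster_config, always use
    # a job cluster, even when a cluster_id is also present.
    job_cluster_config = model_config.get("job_cluster_config")
    if job_cluster_config is not None:
        return ("job_cluster", job_cluster_config)
    # Otherwise use the per-model cluster_id, falling back to the
    # connection-level default from the profile.
    return ("existing_cluster", model_config.get("cluster_id", default_cluster_id))

print(resolve_compute({"job_cluster_config": {"num_workers": 2}}, "abc-123"))
# ('job_cluster', {'num_workers': 2})
print(resolve_compute({"cluster_id": "per-model"}, "abc-123"))
# ('existing_cluster', 'per-model')
print(resolve_compute({}, "abc-123"))
# ('existing_cluster', 'abc-123')
```

The design choice here mirrors the sentence "if `job_cluster_config` is set for a model, we will always use it": the check for `job_cluster_config` happens before `cluster_id` is even consulted.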