Support for existing_cluster_id in DatabricksNotebookOperator #73
Conversation
LGTM in general. Have a question inline.
@dimberman would you please be able to offer additional pair of eyes and help review the PR if possible?
@Hang1225 I modified the PR description and commit message to elaborate further. Next, I will soon work on releasing this fix. Thanks a lot for reporting the issue and contributing a fix, appreciate it 👏🏽
Hi @Hang1225, we just released 0.2.1 of the provider (https://pypi.org/project/astro-provider-databricks/0.2.1/), which includes this PR. Please try out the new release and let us know whether it helps your use case. Thanks again!
When tasks are launched with DatabricksNotebookOperator from within a task group using the DatabricksWorkflowTaskGroup, we currently do not support using existing_cluster_id for those notebook tasks. This PR addresses the issue by adding support for existing_cluster_id in such cases, while continuing to support the current job_cluster_key approach, so users can combine both within a single workflow.

closes: #70
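To illustrate the distinction the PR is about, here is a hedged sketch (not the provider's actual code; the helper name is hypothetical) of how a notebook task spec for the Databricks Jobs API can reference either an existing all-purpose cluster via existing_cluster_id or a shared job cluster via job_cluster_key:

```python
def build_notebook_task(notebook_path, existing_cluster_id=None, job_cluster_key=None):
    """Build a Databricks Jobs API task fragment for a notebook task.

    Exactly one of existing_cluster_id or job_cluster_key must be given,
    mirroring how each notebook task in a workflow picks its cluster.
    """
    if (existing_cluster_id is None) == (job_cluster_key is None):
        raise ValueError(
            "Specify exactly one of existing_cluster_id or job_cluster_key"
        )
    task = {"notebook_task": {"notebook_path": notebook_path}}
    if existing_cluster_id is not None:
        # Run on an already-running all-purpose cluster.
        task["existing_cluster_id"] = existing_cluster_id
    else:
        # Run on a job cluster defined at the workflow level.
        task["job_cluster_key"] = job_cluster_key
    return task


# Two tasks in the same workflow can mix both approaches:
etl = build_notebook_task("/Shared/etl", existing_cluster_id="1234-567890-abcde123")
report = build_notebook_task("/Shared/report", job_cluster_key="main_cluster")
```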