Detach from running cluster #186
I think that @jcrist has conventions for this in dask-gateway and dask-yarn.

> On Mon, Oct 14, 2019 at 6:07 AM Jacob Tomlinson wrote:
>
> The current behaviour of dask-kubernetes is to create ephemeral clusters and use them to perform calculations. There is a bunch of logic here to clean up the cluster when the Python object is removed.
>
> This may not always be the desired use case and we may want to support detaching from a cluster. This also fits with #185, where we may want to attach to an already running cluster.
We use the keyword `shutdown_on_close`:

```python
# On .close()/GC of cluster object, the remote cluster is cleaned up
cluster = GatewayCluster()
cluster.close()

# On .close()/GC of cluster object, only local resources are cleaned up
cluster = GatewayCluster(shutdown_on_close=False)
cluster.close()

# You can override this behavior by passing `shutdown=True` to close,
# or by using the `shutdown()` method, which is an alias for this
cluster.close(shutdown=True)
cluster.shutdown()

# When connecting to a previously started cluster, the default for
# shutdown_on_close is False
cluster = GatewayCluster.from_name(cluster_name)
```

This option only makes sense if the scheduler is remote (which is always true for dask-gateway).
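To make the convention above concrete without needing a running gateway, here is a minimal, self-contained sketch of the same semantics. `DetachableCluster` is a hypothetical toy class (not a real dask API): it models a cluster object whose `close()` only tears down the remote cluster when `shutdown_on_close` is true, while attaching via `from_name` defaults to detach-on-close.

```python
class DetachableCluster:
    """Toy stand-in for a cluster manager; not a real dask class."""

    def __init__(self, name, shutdown_on_close=True):
        self.name = name
        self.shutdown_on_close = shutdown_on_close
        self.remote_running = True   # pretend a remote scheduler was started
        self.local_open = True       # local handles (connections, comms)

    @classmethod
    def from_name(cls, name):
        # Attaching to an existing cluster defaults to detach-on-close
        return cls(name, shutdown_on_close=False)

    def close(self, shutdown=None):
        if shutdown is None:
            shutdown = self.shutdown_on_close
        self.local_open = False          # always release local resources
        if shutdown:
            self.remote_running = False  # tear down the remote cluster too

    def shutdown(self):
        # Alias for close(shutdown=True)
        self.close(shutdown=True)


# Default: closing also shuts down the remote cluster
c1 = DetachableCluster("a")
c1.close()
assert not c1.remote_running

# Detached: closing releases local resources but leaves the remote cluster up
c2 = DetachableCluster("b", shutdown_on_close=False)
c2.close()
assert c2.remote_running and not c2.local_open

# Re-attach later and explicitly shut down
c3 = DetachableCluster.from_name("b")
c3.shutdown()
assert not c3.remote_running
```

The design choice worth noting is that local cleanup always happens on `close()`; only the remote teardown is conditional, which is what makes "detach" safe for shared, long-lived clusters.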
Thanks @jcrist, this is the kind of thing I want to implement here. Now that we have the option of creating remote schedulers, it would be good to implement this functionality.
I'm going to close this out as being closely related to #185.