This repository has been archived by the owner on Feb 10, 2021. It is now read-only.
worker_info dict has no key 'name' #95
Comments
I suspect that this is due to a change upstream in dask/distributed that
wasn't mirrored in this library. This should probably look at
self.scheduler.workers.items() and then at the `name` attribute, though
perhaps it wants something else like host or address (I'm no longer very
familiar with this code).
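A minimal sketch of that lookup, assuming `self.scheduler.workers` maps each worker address to a `WorkerState` object that carries a `name` attribute; the helper name here is made up for illustration:

```python
def worker_names_to_addresses(scheduler):
    """Map worker name -> address, the lookup stop_workers builds.

    Hypothetical helper; assumes ``scheduler.workers`` maps each worker
    address to a WorkerState object with a ``name`` attribute.
    """
    return {ws.name: address for address, ws in scheduler.workers.items()}
```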
Three solutions here:
1. Pin to an earlier version of distributed
2. Submit a PR to bring dask-drmaa up to date
3. Use dask-jobqueue instead, which seems to be under more active
maintenance today
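For option 3, a minimal dask-jobqueue sketch; `SGECluster` and the resource values are illustrative only (dask-jobqueue also ships SLURMCluster, PBSCluster, and others, so pick the class that matches your scheduler):

```python
from dask.distributed import Client
from dask_jobqueue import SGECluster

# Illustrative resources; adjust queue/cores/memory for your site.
cluster = SGECluster(queue="default", cores=2, memory="4 GB")
cluster.scale(4)          # request four workers
client = Client(cluster)  # then use the client as you would with dask-drmaa
```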
…On Thu, Oct 25, 2018 at 12:32 PM Maximilian Nöthe wrote:
On cluster.close(), this error is thrown on current master:
Traceback (most recent call last):
File "/home/smmanoet/.local/lib/python3.6/site-packages/tornado/gen.py", line 326, in wrapper
yielded = next(result)
File "/home/smmanoet/.local/lib/python3.6/site-packages/dask_drmaa/core.py", line 285, in stop_workers
v['name']: k for k, v in self.scheduler.worker_info.items()
File "/home/smmanoet/.local/lib/python3.6/site-packages/dask_drmaa/core.py", line 285, in <dictcomp>
v['name']: k for k, v in self.scheduler.worker_info.items()
KeyError: 'name'
I looked at the entries of worker_info; they only have the keys:
cpu
memory
time
read_bytes
write_bytes
num_fds
executing
in_memory
ready
in_flight
Hmm... I have been pretty happily staying up to date with Dask and Distributed without many changes in this library for a while. There may very well be a bug, but not one I've seen, it seems. I would happily accept a PR, though. Edit: I should add that I'm using the last stable release of dask-drmaa as opposed to master.
So I ran into this recently. Maybe there was something off with my configuration before? From what I can tell, this line isn't really needed; the workers are shut down through other means anyway. So maybe we should just remove it?