Conversation
Can one of the admins verify this patch?
Can you please add a test for this? See https://github.com/dask/distributed/blob/main/distributed/tests/test_variable.py

I also noticed the linting is failing. Please have a look at http://distributed.dask.org/en/stable/develop.html#code-formatting
I will do. Thanks
Unit Test Results
15 files ±0   15 suites ±0   7h 22m 57s ⏱️ +19m 9s
For more details on these failures, see this check. Results for commit f840a93. ± Comparison against base commit 63cdddd.
♻️ This comment has been updated with latest results.
            name=self.name, client=self.client.id
        )

    def is_set(self, **kwargs):
Why the kwargs? They are forwarded to _is_set and will raise there.
@fjetter Thanks for the review.
It's a good question. I'm not sure what that pattern is about. I noticed that the get and set variants follow the same pattern, and they would raise the same error.
But yeah, I'm also not too sure; I just followed the established pattern. Happy to remove it. Let me know.
Yes, please remove it. For get we can at least pass timeout through, but a cleaner way would be to be explicit. For set there is no excuse, and we should remove it (not in this PR).
OK, it seems strange; maybe a byproduct of a past implementation. Anyway, I've pushed with the kwargs removed from the is_set signature. Thanks
Hiya, just wondering if there's anything else I need to do on this pull request. Some of the tests were failing when last run, but I believe these were not associated with the changes made in this PR. Is it possible to manually prompt the CI to rerun the tests?
Sorry for letting this sit for so long. I just merged main to see if CI is happy and will merge upon green-ish CI. Thank you!
No problem, thanks for pushing this through. It has also been on my to-do list. Hopefully the CI goes well 🤞
There is a related failure in test_variable:

________________________________ test_variable _________________________________
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:41463', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:45705', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:36901', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_variable(c, s, a, b):
x = Variable("x")
xx = Variable("x")
assert x.client is c
future = c.submit(inc, 1)
assert not await x.is_set()
await x.set(future)
assert await x.is_set()
future2 = await xx.get()
assert future.key == future2.key
del future, future2
await asyncio.sleep(0.1)
assert s.tasks # future still present
x.delete()
> assert not await x.is_set()
E assert not True
distributed/tests/test_variable.py:35: AssertionError
I've taken a look at this, and it's a race condition, i.e. adding a

I believe the solution would be to make

Alternatively, it may be possible to add either an additional

Or, potentially, it might be possible to have that wait somehow in the

I'm not sure if that last option is possible in the architecture; it might explain why

I think I may need some help with this one.
pre-commit run --all-files