What happened:
When the program completes, exceptions are thrown:
RuntimeError: cannot schedule new futures after interpreter shutdown
Please see the log file: test.log
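For context, this error family comes from concurrent.futures: submitting work to an executor that has already shut down raises a RuntimeError, and when it happens during interpreter teardown the message reads "cannot schedule new futures after interpreter shutdown". A minimal, dask-free sketch of the executor-level variant:

```python
# Minimal sketch (not dask-specific): concurrent.futures raises a
# RuntimeError when a task is submitted after the executor has shut down.
# The "after interpreter shutdown" wording appears when this happens
# while the interpreter itself is exiting.
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
executor.shutdown()

try:
    executor.submit(pow, 2, 2)
except RuntimeError as exc:
    print(exc)  # cannot schedule new futures after shutdown
```

This suggests something in the scheduler/worker teardown path is still trying to schedule callbacks while Python is exiting.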
What you expected to happen:
I expect the scheduler and workers to exit without exceptions.
Minimal Complete Verifiable Example:
To run this example, I use the following command:
mpirun -n 4 python test.py
# Put your MCVE code here
import logging

import dask
from dask.distributed import progress
from dask_mpi import initialize
from distributed import Client
from distributed.batched import logger as batched_log
from distributed.worker import logger as worker_log


def square(x):
    return x ** 2


def main():
    dask.config.set({
        "distributed.worker.use-file-locking": True,
        "logging.distributed": "error",
    })
    worker_log.setLevel(logging.CRITICAL)
    batched_log.setLevel(logging.CRITICAL)

    initialize(nthreads=1, dashboard=False)
    with Client() as client:
        futures = client.map(square, range(1000))
        progress(futures)
        results = client.gather(futures)
        client.shutdown()
    print(results[:40])


if __name__ == '__main__':
    main()
Anything else we need to know?:
Environment:
- Dask version: 2022.05.0
- Python version: 3.10.0
- Operating System: Manjaro Linux
- Install method (conda, pip, source): conda
Cluster Dump State: