cleanup exceptions when running client and workers with mpirun #6302

@guziy

Description

What happened:
When the program completes, exceptions are thrown:

RuntimeError: cannot schedule new futures after interpreter shutdown

Please see the log file: test.log
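For context, this RuntimeError originates in Python's `concurrent.futures` machinery, not in the user code: once an executor (or the interpreter itself) has shut down, any further `submit()` is rejected. A minimal stdlib sketch of the same failure mode (the executor here is illustrative, not the one distributed uses internally):

```python
from concurrent.futures import ThreadPoolExecutor

ex = ThreadPoolExecutor(max_workers=1)
ex.shutdown(wait=True)

# Submitting after shutdown is rejected, just like in the traceback above.
try:
    ex.submit(print, "too late")
except RuntimeError as err:
    print(err)  # cannot schedule new futures after shutdown
```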

What you expected to happen:
I expect the scheduler and workers to exit without exceptions.

Minimal Complete Verifiable Example:
To run this example I use the following command:

mpirun -n 4 python test.py
import logging

from dask_mpi import initialize
from distributed import Client
import dask
from dask.distributed import progress
from distributed.worker import logger as worker_log
from distributed.batched import logger as batched_log

def square(x):
    return x ** 2


def main():
    # Quiet routine worker and batched-send logging so only errors are shown.
    dask.config.set({
        "distributed.worker.use-file-locking": True,
        "logging.distributed": "error",
    })
    worker_log.setLevel(logging.CRITICAL)
    batched_log.setLevel(logging.CRITICAL)

    # Start the Dask scheduler and workers across the MPI ranks (dask-mpi).
    initialize(nthreads=1, dashboard=False)

    with Client() as client:
        futures = client.map(square, range(1000))
        progress(futures)
        results = client.gather(futures)
        client.shutdown()  # tear down the cluster before the with-block closes the client

    print(results[:40])


if __name__ == '__main__':
    main()
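One way to narrow the report down: an explicit shutdown followed by the context manager's own close is usually harmless by itself, so the traceback points at something still scheduling work after teardown. A stdlib sketch of that distinction, assuming the `with Client()` exit behaves like `ThreadPoolExecutor`'s (which also calls shutdown again, as a no-op):

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as ex:
    result = ex.submit(lambda x: x ** 2, 7).result()
    ex.shutdown(wait=True)  # explicit shutdown, analogous to client.shutdown()
# The with-block calls shutdown() again on exit; that repeat raises nothing,
# suggesting the errors in test.log come from work scheduled *after* teardown,
# not from the double shutdown itself.
print(result)  # 49
```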

Anything else we need to know?:

Environment:

  • Dask version: 2022.05.0
  • Python version: 3.10.0
  • Operating System: Manjaro Linux
  • Install method (conda, pip, source): conda
Cluster Dump State:
