Set GPU ThreadPoolExecutor and set known libraries to use it #5084

@mrocklin

Description

With @madsbk's recent work allowing for multiple executors, we might consider creating a GPU ThreadPoolExecutor in workers by default when a GPU is detected, and then annotating tasks known to be GPU-targeted so that they run on it. This would improve the likelihood that a user of vanilla Dask has a good time with RAPIDS, CuPy, or other known projects.

We probably can't do this in full generality (it's hard to detect all code that uses GPUs), but we're no worse off when we miss something, and we can handle the common cases well.

Concretely, I propose:

  1. Having the Worker class try importing pynvml (or some future NVIDIA Python library) and, if it detects a GPU, create a single-threaded ThreadPoolExecutor:

    try:
        import pynvml
    except ImportError:
        pass
    else:
        try:
            pynvml.nvmlInit()
            ngpus = pynvml.nvmlDeviceGetCount()
        except pynvml.NVMLError:
            ngpus = 0
        if ngpus > 0:
            # One thread so GPU tasks don't oversubscribe the device
            self.executors["gpu"] = ThreadPoolExecutor(max_workers=1)
  2. In known GPU libraries, we would add an annotation to every layer:

    class ArrayLayer:
        def __init__(self, ...):
            # Route cupy-backed layers to the dedicated GPU executor;
            # setdefault preserves any executor the user set explicitly
            if "cupy" in str(type(self._meta)):
                self.annotations.setdefault("executor", "gpu")
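The two steps above can be sketched end to end as a pair of plain functions. This is only an illustrative sketch, not distributed's actual Worker code: `make_executors` and `executor_annotation_for` are hypothetical helper names, and the pynvml probe degrades gracefully when no GPU (or no pynvml) is present.

```python
from concurrent.futures import ThreadPoolExecutor


def make_executors():
    """Build a worker's executor mapping, adding a dedicated
    single-threaded GPU executor only when pynvml reports a device.
    (Hypothetical helper; names are illustrative.)"""
    executors = {"default": ThreadPoolExecutor(max_workers=4)}
    try:
        import pynvml

        pynvml.nvmlInit()
        ngpus = pynvml.nvmlDeviceGetCount()
    except Exception:  # pynvml missing, or no NVML driver available
        ngpus = 0
    if ngpus > 0:
        # One thread so GPU tasks serialize on the device
        executors["gpu"] = ThreadPoolExecutor(max_workers=1)
    return executors


def executor_annotation_for(meta):
    """Step 2 in miniature: pick the executor annotation from a
    layer's meta object, using the same string check as above."""
    return "gpu" if "cupy" in str(type(meta)) else "default"
```

On a machine without a GPU, `make_executors()` simply returns the default pool, so nothing changes for CPU-only users.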

cc @madsbk @quasiben @kkraus14

dask-cuda handles this for users who use it, but it feels like something we could upstream. It would also help with mixed CPU-GPU computation.
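Once the executors exist, the worker-side dispatch is just a lookup on the task's `executor` annotation. A minimal stdlib sketch of that routing, with hypothetical names (`run_task` is not a real distributed API), assuming both pools were created as proposed above:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the worker's executor mapping: a general pool plus
# a single-threaded pool that represents the GPU executor.
executors = {
    "default": ThreadPoolExecutor(max_workers=4),
    "gpu": ThreadPoolExecutor(max_workers=1),
}


def run_task(func, *args, annotations=None):
    """Route a task to the executor named by its 'executor'
    annotation, falling back to the default pool."""
    name = (annotations or {}).get("executor", "default")
    pool = executors.get(name, executors["default"])
    return pool.submit(func, *args).result()
```

A CPU task and an annotated GPU task go through the same code path; only the pool differs, which is what makes mixed CPU-GPU graphs work.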
