Adapt to rapidsai/rmm#1221 which moves allocator callbacks (#1129)
The allocator callbacks now live in their own submodules (so that RMM does not, for example, import pytorch unless required) and so must be explicitly imported.
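
For reference, a minimal sketch of the import change this commit makes (assuming an RMM build that already includes rapidsai/rmm#1221, where the callbacks live under `rmm.allocators.*`):

```python
# Sketch of the migration, assuming an RMM build that includes
# rapidsai/rmm#1221 (the rmm.allocators.* submodules).
import cupy

# Old location (pre-#1221): rmm.rmm_cupy_allocator
from rmm.allocators.cupy import rmm_cupy_allocator

# Route CuPy allocations through RMM.
cupy.cuda.set_allocator(rmm_cupy_allocator)

# The PyTorch callback moved likewise; importing it from its own
# submodule is what keeps pytorch out of rmm's import-time dependencies:
# from rmm.allocators.torch import rmm_torch_allocator
```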

Authors:
  - Lawrence Mitchell (https://github.com/wence-)
  - Peter Andreas Entschev (https://github.com/pentschev)

Approvers:
  - Peter Andreas Entschev (https://github.com/pentschev)

URL: #1129
wence- committed Feb 28, 2023
1 parent 7c0bde1 commit b9561cf
Showing 1 changed file with 2 additions and 1 deletion.
3 changes: 2 additions & 1 deletion dask_cuda/benchmarks/utils.py
@@ -364,6 +364,7 @@ def setup_memory_pool(
     import cupy

     import rmm
+    from rmm.allocators.cupy import rmm_cupy_allocator

     from dask_cuda.utils import get_rmm_log_file_name

@@ -380,7 +381,7 @@ def setup_memory_pool(
         logging=logging,
         log_file_name=get_rmm_log_file_name(dask_worker, logging, log_directory),
     )
-    cupy.cuda.set_allocator(rmm.rmm_cupy_allocator)
+    cupy.cuda.set_allocator(rmm_cupy_allocator)
     if statistics:
         rmm.mr.set_current_device_resource(
             rmm.mr.StatisticsResourceAdaptor(rmm.mr.get_current_device_resource())
