Adapt to rapidsai/rmm#1221 which moves allocator callbacks
The allocator callbacks now live in their own submodules (so that RMM
does not, for example, import pytorch unless required) and so must be
explicitly imported.
wence- committed Feb 24, 2023
1 parent 3e26149 commit 67e7082
Showing 1 changed file with 3 additions and 1 deletion.
src/distributed_merge/cudf_merge.py (3 additions, 1 deletion):

@@ -291,6 +291,8 @@ def __getattr__(self, name):
 @nvtx.annotate(domain="MERGE")
 def initialize_rmm(device: int):
     # Work around cuda-python initialization bugs
+    from rmm.allocators.cupy import rmm_cupy_allocator
+
     _, dev = cudart.cudaGetDevice()
     cuda.cuDevicePrimaryCtxRelease(dev)
     cuda.cuDevicePrimaryCtxReset(dev)
@@ -303,7 +305,7 @@ def initialize_rmm(device: int):
         managed_memory=False,
         devices=device,
     )
-    cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
+    cp.cuda.set_allocator(rmm_cupy_allocator)
 
 
 @nvtx.annotate(domain="MERGE")
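After rmm#1221 the CuPy allocator callback lives in the `rmm.allocators.cupy` submodule rather than at the top level of `rmm`, precisely so that importing `rmm` alone does not drag in optional frameworks. Downstream code that needs to run against both old and new RMM releases could hedge the import as sketched below; this is an illustrative compatibility shim, not part of the commit, and the final `None` fallback exists only so the sketch runs in environments without RMM installed:

```python
try:
    # New location (rmm#1221 and later): framework-specific submodule.
    from rmm.allocators.cupy import rmm_cupy_allocator
except ImportError:
    try:
        # Old location: callback exposed at the top level of rmm.
        from rmm import rmm_cupy_allocator
    except ImportError:
        # RMM is not installed here; real code would re-raise instead.
        rmm_cupy_allocator = None

# Whichever branch succeeded, the name is either a callable allocator
# (to pass to cp.cuda.set_allocator) or None when RMM is absent.
print(rmm_cupy_allocator is None or callable(rmm_cupy_allocator))
```

The same pattern applies to the Numba and PyTorch callbacks, which moved into sibling submodules under `rmm.allocators`.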
