
Unexpected difference torch.multiprocessing.manager.queue and torch.multiprocessing.queue #30401

Open
shayben opened this issue Nov 25, 2019 · 5 comments
Labels
module: cuda - Related to torch.cuda, and CUDA support in general
module: multiprocessing - Related to torch.multiprocessing
triaged - This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@shayben

shayben commented Nov 25, 2019

It seems that torch.multiprocessing.Manager().Queue doesn't support sharing CUDA tensors across processes.
Running the repro below raises the following exception:

RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.

import torch
import torch.multiprocessing as mp


def produce(in_Q, out_Q):
    while True:
        t = in_Q.get()
        v = torch.tensor(t, device='cuda')
        out_Q.put(v)

def consume(out_Q):
    res = out_Q.get()
    return res

def main():
    use_mp_queue = False  # set to True to use torch.multiprocessing.Queue, which gives the expected behavior
    if use_mp_queue:
        qsrc = mp
    else:
        manager = mp.Manager()
        qsrc = manager

    in_Q = qsrc.Queue(10)
    out_Q = qsrc.Queue(10)

    consumer = mp.Process(target=consume, args=(out_Q,))
    consumer.start()
    # The producer loops forever, so run it as a daemon so the script can exit.
    producer = mp.Process(target=produce, args=(in_Q, out_Q), daemon=True)
    producer.start()
    in_Q.put(1)
    consumer.join()
    print('done.')

if __name__ == "__main__":
    main()
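For context on why the two queue types behave differently: torch.multiprocessing.Queue sends tensors directly between the endpoint processes, while Manager().Queue is a proxy object, so every put()/get() relays the pickled payload through a separate manager server process. Most likely the manager process receives the CUDA tensor from the producer and then attempts to re-send it to the consumer, which is exactly the re-sharing the error message forbids. A torch-free sketch showing that the manager really is a third process (note: _process is a CPython-internal attribute of the manager object, used here only for illustration and subject to change between Python versions):

```python
import multiprocessing as mp
import os

def manager_is_separate_process():
    """Return (our_pid, manager_server_pid) to show the extra hop."""
    manager = mp.Manager()  # starts a dedicated server process
    try:
        # _process is an internal attribute (assumption: CPython's
        # multiprocessing.BaseManager keeps the server Process here).
        server_pid = manager._process.pid
        return os.getpid(), server_pid
    finally:
        manager.shutdown()

if __name__ == "__main__":
    ours, server = manager_is_separate_process()
    print(f"main pid={ours}, manager server pid={server}")
```

Because the pids differ, anything put on a manager queue is pickled into that server process and pickled back out again, an extra hop that plain (torch.)multiprocessing.Queue does not make.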

cc @ngimel

@izdeby izdeby added module: multiprocessing Related to torch.multiprocessing module: cuda Related to torch.cuda, and CUDA support in general triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Nov 26, 2019
@michelgokan

Any suggestions on possible alternatives?
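One alternative is the one the repro's own comment points at: use the queue type directly (torch.multiprocessing.Queue) instead of going through a Manager, so the tensor travels straight from producer to consumer with no intermediate process. A minimal torch-free sketch of that wiring using plain multiprocessing.Queue (produce and run_pipeline are illustrative names, not PyTorch API):

```python
import multiprocessing as mp

def produce(in_q, out_q):
    # Single-shot producer: read one item, transform it, forward it.
    out_q.put(in_q.get() * 2)

def run_pipeline(value):
    in_q = mp.Queue()
    out_q = mp.Queue()
    p = mp.Process(target=produce, args=(in_q, out_q))
    p.start()
    in_q.put(value)
    result = out_q.get()  # received directly from the producer process
    p.join()
    return result

if __name__ == "__main__":
    print(run_pipeline(21))
```

With torch.multiprocessing.Queue the same shape of code shares CUDA tensors via CUDA IPC between the two endpoints; the point here is only that there is no third process relaying the payload, unlike with Manager().Queue.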

@berooo

berooo commented Nov 12, 2020

Have you solved it?

@marvelous-melanie

Hi, I'm wondering if there are any updates on this issue?

@AOTY-szy

Have you solved it?

@btalberg

btalberg commented Nov 3, 2023

I'm struggling with the same issue on torch 2.1.0.
