DataParallel is not compatible with pack_padded_sequence #2312
Comments
Can you please provide a minimal reproducer?
@ZiJianZhao still waiting on a response.
Hi, I have the same error (there's also this issue #1591). The code below works on one GPU, but fails with `DataParallel` on multiple GPUs:

```python
import numpy as np
import torch
from torch.autograd import Variable


class RNNDataParallel(torch.nn.Module):
    def __init__(self):
        super(RNNDataParallel, self).__init__()

    def forward(self, inputs, lengths):
        # With multiple GPUs, each replica receives only a slice of
        # `inputs` but the full `lengths` list, so the sizes mismatch.
        packed = torch.nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
        return packed


model = RNNDataParallel()
model = torch.nn.DataParallel(model)
model = model.cuda()

inputs = Variable(torch.from_numpy(np.array([
    [1, 2, 3],
    [4, 5, 0],
])))
lengths = [3, 2]

packed = model(inputs, lengths)
print(packed)
```

My PyTorch version is 0.2.0+e02f7bf
I encountered the same issue as @jgc128.

EDIT: I think the issue is that DataParallel does not slice CPU data (such as the `lengths` list) the way it slices the input tensors, so each replica gets the full list of lengths alongside only its share of the batch.

EDIT2: I "fixed" this by transforming the `lengths` list into a tensor, so that DataParallel scatters it together with the inputs (see the sketch below).
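A minimal sketch of that workaround, assuming a recent PyTorch where `pack_padded_sequence` accepts a lengths tensor plus `enforce_sorted=False` and `pad_packed_sequence` accepts `total_length` (the module name `PackingModule` is illustrative, not from the original thread):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence


class PackingModule(nn.Module):
    def forward(self, inputs, lengths):
        # Because `lengths` is a tensor, DataParallel scatters it along
        # dim 0, so each replica sees a slice that matches its inputs.
        packed = pack_padded_sequence(
            inputs, lengths.cpu(), batch_first=True, enforce_sorted=False
        )
        # Pad back to the full time dimension so the per-replica outputs
        # have identical shapes and can be gathered afterwards.
        outputs, _ = pad_packed_sequence(
            packed, batch_first=True, total_length=inputs.size(1)
        )
        return outputs


model = nn.DataParallel(PackingModule()).cuda()

inputs = torch.tensor([[1.0, 2.0, 3.0],
                       [4.0, 5.0, 0.0]]).cuda()
lengths = torch.tensor([3, 2]).cuda()  # a tensor, not a list, so it is scattered

outputs = model(inputs, lengths)
print(outputs)
```

Padding back inside `forward` matters: DataParallel's gather step concatenates plain tensors along the batch dimension, so the per-replica outputs must have equal time dimensions, and gathering a `PackedSequence` directly does not produce a meaningful result.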
@jekbradbury Not on my build (conda pytorch 0.2.0). Even so, the issue lies within DataParallel's scatter step, which slices tensor arguments along the batch dimension but copies non-tensor arguments, such as lists, to every replica unchanged.
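A small demonstration of that scatter behavior, as a sketch assuming two visible GPUs (`scatter` is the same helper that `DataParallel` uses internally):

```python
import torch
from torch.nn.parallel import scatter

inputs = torch.arange(8, dtype=torch.float).view(4, 2).cuda()
lengths = [2, 2, 1, 1]

# scatter slices tensors along dim 0, one chunk per device, but
# non-tensor arguments like Python lists end up whole on every device.
chunks = scatter((inputs, lengths), target_gpus=[0, 1])
for inp, lens in chunks:
    print(inp.size(0), len(lens))  # prints "2 4" twice: sizes no longer match
```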
If the model uses `pack_padded_sequence`, then wrapping it in a `DataParallel` module fails with `ValueError: lengths array has incorrect size`.