
MaskZero breaks with CUDA #313

Closed
RicherMans opened this issue Aug 2, 2016 · 2 comments
@RicherMans

Hey there, I recently updated my torch and rnn (including dpnn and torchx), and after updating, the MaskZero layer does not seem to work. The following code demonstrates the problem:

require 'rnn'
require 'nn'

cuda = arg[1]

mdl = nn.Sequential()

-- MaskZero wraps Linear so that all-zero rows of the input
-- produce all-zero output (masking along dimension 1)
mdl:add(nn.MaskZero(nn.Linear(10, 20), 1))
mdl:add(nn.ReLU())
mdl:add(nn.LogSoftMax())

inp = torch.rand(10, 10)

if cuda == "cuda" then
    require 'cunn'
    mdl = mdl:cuda()
    inp = inp:cuda()
end

print(mdl:forward(inp))

When I run this script on the CPU, it works without any problem, but on the GPU it crashes with:

MaskZero.lua:49: invalid arguments: CudaTensor CudaTensor number 
expected arguments: *CudaTensor* CudaByteTensor float
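For what it's worth, the error message suggests that cutorch's `maskedFill` now expects a `CudaByteTensor` mask instead of the `CudaTensor` mask that MaskZero passes. A minimal sketch of the mismatch (my assumption about the cause, not a confirmed fix):

```lua
require 'cutorch'

local t = torch.CudaTensor(10, 20):fill(1)
-- MaskZero builds its mask as a regular CudaTensor (float mask)
local mask = torch.CudaTensor(10, 20):zero()

-- On recent cutorch this reproduces the error above:
-- t:maskedFill(mask, 0)

-- Converting the mask to a byte tensor makes the call succeed:
t:maskedFill(mask:type('torch.CudaByteTensor'), 0)
```

So presumably MaskZero.lua needs to convert its mask before calling `maskedFill` on CUDA tensors.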
@nicholas-leonard
Member

@RicherMans Good catch. Apparently, this cutorch commit torch/cutorch@20001ac breaks backwards compatibility. I am looking into it.

@nicholas-leonard
Member

Fixed by #319.
