
Issues about running gan_toy.py #6

Closed
yscacaca opened this issue May 27, 2017 · 2 comments

Comments

@yscacaca

Hi,

I'm trying to run gan_toy.py without any modification, using a master build of PyTorch after commit #1507. However, I hit the following error when running the code:

```
Traceback (most recent call last):
  File "gan_toy.py", line 270, in <module>
    gradient_penalty.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 145, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 23, in backward
    grad_input = torch.mm(grad_output, weight)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 531, in mm
    return self._static_blas(Addmm, (output, 0, 1, self, matrix), False)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 524, in _static_blas
    return cls.apply(*(args[:1] + args[-2:] + (alpha, beta, inplace)))
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/blas.py", line 24, in forward
    matrix1, matrix2, out=output)
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor), but expected one of:
 * (torch.cuda.ByteTensor source, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, int alpha, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, int alpha, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, int alpha, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
      didn't match because some of the arguments have invalid types: (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor)
 * (int beta, torch.cuda.ByteTensor source, int alpha, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
      didn't match because some of the arguments have invalid types: (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor)
```
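
For context, the failing call is the double backward through the WGAN-GP gradient penalty. A minimal sketch of that computation, written against the current PyTorch API; `netD`, `LAMBDA`, and the shapes here are illustrative, not the exact gan_toy.py code:

```python
import torch
from torch import autograd, nn

# Illustrative stand-ins for the toy discriminator and data (not gan_toy.py's).
netD = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
real, fake = torch.randn(256, 2), torch.randn(256, 2)
LAMBDA = 0.1

# Random interpolation between real and fake samples.
alpha = torch.rand(256, 1)
interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
disc_out = netD(interpolates)

# First backward pass: gradients of D's output w.r.t. the interpolates,
# with create_graph=True so the gradient itself is differentiable.
gradients = autograd.grad(outputs=disc_out, inputs=interpolates,
                          grad_outputs=torch.ones_like(disc_out),
                          create_graph=True, retain_graph=True)[0]
gradient_penalty = ((gradients.norm(2, dim=1) - 1) ** 2).mean() * LAMBDA

# Second backward pass (backward through the gradient): this is the call
# that fails above, because the ReLU double backward yields a ByteTensor.
gradient_penalty.backward()
```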

I'm wondering whether you have any idea about the cause of this problem.

Thanks.

@caogang
Owner

caogang commented May 27, 2017

Sorry, this is a bug in PyTorch itself.
I can give you a fix that makes the error go away: change the PyTorch source as shown below and recompile it.

In `torch/nn/_functions/thnn/activation.py`:

```diff
         else:
-            grad_input = grad_output.masked_fill(input <= ctx.threshold, 0)
+            mask = input > ctx.threshold
+            grad_input = mask.type_as(grad_output) * grad_output
         return grad_input, None, None, None
```
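
My reading of why this helps (an interpretation, not verified line by line): the comparison `input > ctx.threshold` yields a ByteTensor mask, and with `masked_fill` that byte type could leak into the double-backward graph, which is why `torch.mm` later receives a `torch.cuda.ByteTensor`. Casting the mask with `type_as(grad_output)` keeps the whole graph in float. A tiny illustration of the cast:

```python
import torch

# Illustration of the dtype issue behind the fix. In 2017-era PyTorch a
# comparison produced a ByteTensor (uint8); current PyTorch produces a
# BoolTensor. Either way the mask is not a float tensor, so it must be
# cast before it can participate in float gradient math.
grad_output = torch.randn(4)                 # float gradient flowing back
input = torch.randn(4)

mask = input > 0.0                           # byte/bool mask, non-float dtype
grad_input = mask.type_as(grad_output) * grad_output  # cast, then multiply

print(mask.dtype, grad_input.dtype)          # e.g. torch.bool torch.float32
```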

@yscacaca
Author

It seems to be working now. Thanks for your help!
