
I installed pytorch 0.1.12+ac1c674 and it errors like this. #8

Closed

Naruto-Sasuke opened this issue Jun 4, 2017 · 3 comments

Comments

Naruto-Sasuke commented Jun 4, 2017

pytorch/pytorch@ac1c674

Traceback (most recent call last):
  File "gan_toy.py", line 270, in <module>
    gradient_penalty.backward()
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 151, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/linear.py", line 23, in backward
    grad_input = torch.mm(grad_output, weight)
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 539, in mm
    return self._static_blas(Addmm, (output, 0, 1, self, matrix), False)
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 532, in _static_blas
    return cls.apply(*(args[:1] + args[-2:] + (alpha, beta, inplace)))
  File "/home/yan/anaconda3/lib/python3.6/site-packages/torch/autograd/_functions/blas.py", line 24, in forward
    matrix1, matrix2, out=output)
TypeError: torch.addmm received an invalid combination of arguments - got (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor), but expected one of:
 * (torch.cuda.ByteTensor source, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, int alpha, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (torch.cuda.ByteTensor source, int alpha, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
 * (int beta, torch.cuda.ByteTensor source, int alpha, torch.cuda.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
      didn't match because some of the arguments have invalid types: (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor)
 * (int beta, torch.cuda.ByteTensor source, int alpha, torch.cuda.sparse.ByteTensor mat1, torch.cuda.ByteTensor mat2, *, torch.cuda.ByteTensor out)
      didn't match because some of the arguments have invalid types: (int, torch.cuda.ByteTensor, int, torch.cuda.ByteTensor, torch.cuda.FloatTensor, out=torch.cuda.ByteTensor)

Is there any quick fix, or should I just wait for the milestone when stable double backprop has been implemented?
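
For context, here is roughly the shape of the gradient-penalty code that produces this traceback; a minimal sketch in the PyTorch 0.1.x Variable style, with illustrative names (`netD`, `real_data`, `fake_data` are assumptions, not the exact contents of gan_toy.py):

```python
import torch
from torch import autograd
from torch.autograd import Variable

def calc_gradient_penalty(netD, real_data, fake_data):
    # Interpolate between real and fake samples; the interpolates must
    # require grad so autograd.grad can differentiate the critic's output
    # with respect to its input.
    alpha = torch.rand(real_data.size(0), 1).expand(real_data.size()).cuda()
    interpolates = Variable(alpha * real_data + (1 - alpha) * fake_data,
                            requires_grad=True)
    disc_interpolates = netD(interpolates)

    # create_graph=True keeps the backward graph so the penalty itself can
    # be backpropagated (double backprop) -- the step that fails above.
    gradients = autograd.grad(
        outputs=disc_interpolates, inputs=interpolates,
        grad_outputs=torch.ones(disc_interpolates.size()).cuda(),
        create_graph=True, retain_graph=True, only_inputs=True)[0]

    return ((gradients.norm(2, dim=1) - 1) ** 2).mean()

# gradient_penalty = calc_gradient_penalty(netD, real_data, fake_data)
# gradient_penalty.backward()  # <- the call at gan_toy.py line 270
```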

caogang (Owner) commented Jun 4, 2017

You can refer to issue #6 to solve this problem.
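
A sketch of the kind of workaround that points at (my paraphrase of the usual fix for this build, not a quote from issue #6): on that build, ReLU's backward apparently produced a ByteTensor mask via masked_fill (as the comment below also suggests), which then reached torch.addmm; replacing the activation with a float-mask multiply keeps every tensor in the double-backward graph a FloatTensor:

```python
import torch.nn as nn

class FloatMaskReLU(nn.Module):
    """Drop-in nn.ReLU replacement (hypothetical name). The comparison
    (x > 0) yields a ByteTensor; casting it to x's float type and
    multiplying keeps the double-backward graph in FloatTensors instead
    of letting a ByteTensor mask reach torch.addmm."""
    def forward(self, x):
        return x * (x > 0).type_as(x)
```

Swapping nn.ReLU() for this in the critic should let gradient_penalty.backward() run on that build; recent PyTorch releases support double backprop through ReLU and do not need the workaround.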

santisy commented Jun 14, 2017

@caogang Is this problem fixed on the master branch of PyTorch? I encountered a different problem where the costs soon became NaN, and it could also be fixed by your solution. Is there still something weird about masked_fill that has not been fixed?

caogang (Owner) commented Jun 14, 2017

It will be fixed soon on the PyTorch master branch. Maybe I will test it in the coming days.

caogang closed this as completed Aug 16, 2017