Error when switching between float and cuda #107

Closed
bamos opened this issue Mar 13, 2016 · 3 comments

bamos commented Mar 13, 2016

Hi, consider the following example to illustrate this issue. It creates a gModule,
converts it to CUDA, and then back to float.
After converting back to float, I think some internal state isn't correctly updated,
and I'm getting the error below.

I'm using the latest master branches of:

  • torch f62a95d3184fd730acdaa4754647b338d7686301
  • cutorch a7147d00e61a5e182a277995f5d1e99ec3bdf0f8
  • nn bc056eeb09f83aaba354d44b985b1819b6b6ee4a
  • cunn 3827fcd820d5d0d90cb37a443c403b47009cb7d4
  • nngraph d0c239b

require 'cutorch'
require 'nn'
require 'cunn'
require 'nngraph'

torch.manualSeed(1)

-- Build a small graph: input -> Linear(3, 1) -> ReLU.
input = nn.Identity()()
L1 = nn.ReLU()(nn.Linear(3, 1)(input))
net = nn.Sequential()
net:add(L1)
g = nn.gModule({input}, {L1})

-- Run on the CPU, move to the GPU, then move back and run on the CPU again.
x = torch.randn(3)
g:forward(x)
g:cuda()
g:forward(x:cuda())
g:float()
g:forward(x) -- errors here

Output

th> g:forward(x)
 0.1432
[torch.DoubleTensor of size 1]

                                                                      [0.0001s]
th> g:cuda()
nn.gModule
                                                                      [0.0596s]
th> g:forward(x:cuda())
 0.1432
[torch.CudaTensor of size 1]

                                                                      [0.0003s]
th> g:float()
nn.gModule
                                                                      [0.0004s]
th> g:forward(x)
/home/bamos/torch/install/share/lua/5.1/nn/Linear.lua:51: expected arguments: *FloatTensor~1D* [FloatTensor~1D] [float] FloatTensor~2D FloatTensor~1D | *FloatTensor~1D* float [FloatTensor~1D] float FloatTensor~2D FloatTensor~1D
stack traceback:
        [C]: in function 'addmv'
        /home/bamos/torch/install/share/lua/5.1/nn/Linear.lua:51: in function 'func'
        /home/bamos/torch/install/share/lua/5.1/nngraph/gmodule.lua:311: in function 'neteval'
        /home/bamos/torch/install/share/lua/5.1/nngraph/gmodule.lua:346: in function 'forward'
        [string "_RESULT={g:forward(x)}"]:1: in main chunk
        [C]: in function 'xpcall'
        /home/bamos/torch/install/share/lua/5.1/trepl/init.lua:630: in function 'repl'
        ...amos/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
        [C]: at 0x00406670

fmassa commented Mar 17, 2016

@bamos the problem is that your input is double, whereas the network is float. Cast x to float and it will work just fine.
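
For reference, a minimal sketch of the fix fmassa describes, reusing g and x from the snippet above (xf is just an illustrative name; torch.randn returns a DoubleTensor by default):

g:float()
xf = x:float()   -- cast the DoubleTensor input to a FloatTensor
g:forward(xf)    -- input now matches the network's type, so forward succeeds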


bamos commented Mar 17, 2016

Thanks @fmassa! Sorry for the noise...

bamos closed this as completed Mar 17, 2016

fmassa commented Mar 17, 2016

No problem! :)
