This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

RuntimeError: torch.cuda.FloatTensor is not enabled. #22

Closed
ehsanzare-zz opened this issue Sep 19, 2018 · 5 comments

Comments

@ehsanzare-zz

Hi,

I got the following error when I ran the code on my Mac machine (no GPU):


INFO - 09/19/18 10:23:37 - 0:00:16 - ============ Building transformer attention model - Decoder ...
INFO - 09/19/18 10:23:37 - 0:00:16 - Sharing decoder input embeddings
INFO - 09/19/18 10:23:38 - 0:00:17 - Sharing decoder transformer parameters for layer 0
INFO - 09/19/18 10:23:38 - 0:00:17 - Sharing decoder transformer parameters for layer 1
INFO - 09/19/18 10:23:38 - 0:00:17 - Sharing decoder transformer parameters for layer 2
INFO - 09/19/18 10:23:38 - 0:00:18 - Sharing decoder projection matrices

Traceback (most recent call last):
  File "main.py", line 242, in <module>
    encoder, decoder, discriminator, lm = build_mt_model(params, data)
  File "/Users/Ehsan/Documents/Ehsan_General/HMQ/HMQ_Projects/UnSup_MT/UnsupervisedMT/NMT/src/model/__init__.py", line 98, in build_mt_model
    return build_attention_model(params, data, cuda=cuda)
  File "/Users/Ehsan/Documents/Ehsan_General/HMQ/HMQ_Projects/UnSup_MT/UnsupervisedMT/NMT/src/model/attention.py", line 801, in build_attention_model
    encoder.cuda()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 249, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply
    module._apply(fn)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply
    module._apply(fn)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 182, in _apply
    param.data = fn(param.data)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 249, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: torch.cuda.FloatTensor is not enabled.


Is there any way that I can run this code on CPU?

Regards

@glample
Contributor

glample commented Sep 19, 2018

Hi,
People have tried to make it run on CPU, for instance https://github.com/facebookresearch/UnsupervisedMT/pull/9/files
Maybe you can have a look at that. Overall, removing all occurrences of .cuda() and replacing all occurrences of torch.cuda with torch should be enough.
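A more maintainable alternative to deleting every .cuda() call is the device-agnostic pattern PyTorch supports: pick a torch.device once and move modules and tensors to it. A minimal sketch (the model and tensor here are placeholders, not code from this repository):

```python
import torch

# Select CUDA when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2)
model.to(device)  # no-op on a CPU-only machine, moves parameters to GPU otherwise

x = torch.randn(3, 4, device=device)
y = model(x)
print(y.shape)  # torch.Size([3, 2])
```

With this pattern the same script runs unchanged on a Mac without a GPU and on a CUDA machine.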

@ehsanzare-zz
Author

Done. I followed https://github.com/facebookresearch/UnsupervisedMT/pull/9/files. I also changed line 209 to:

if torch.cuda.is_available():
    sent1, sent3 = sent1.cuda(), sent3.cuda()

Thanks.
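That guard can be factored into a small helper so every call site stays one line. A sketch (maybe_cuda is a hypothetical name, not part of the repository):

```python
import torch

def maybe_cuda(*tensors):
    # Move tensors to the GPU only when CUDA is actually available;
    # on CPU-only machines this returns the inputs untouched.
    if torch.cuda.is_available():
        return tuple(t.cuda() for t in tensors)
    return tensors

sent1 = torch.zeros(5, dtype=torch.long)
sent3 = torch.ones(5, dtype=torch.long)
sent1, sent3 = maybe_cuda(sent1, sent3)
```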

@mohammedayub44

@ehsanzare I'm trying to run the same experiments. Did you benchmark on different numbers of CPUs? Just curious whether it scales linearly.

Cheers.

Mohammed Ayub

@ehsanzare-zz
Author

No, I did not benchmark it on different numbers of CPUs.

@mohammedayub44

@ehsanzare @glample I'm trying to run it on Windows. The multiprocessing_event_loop.py file raises an error saying signal has no attribute SIGUSR1. Python's signal handling differs on Windows, where SIGUSR1 does not exist. For now, I have replaced all occurrences of SIGUSR1 with SIGTERM and training seems to start fine. Not sure if this is the best way to avoid the error.
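A less invasive option than a blanket replacement is to fall back only when the POSIX signal is missing. A sketch of the guard (RELOAD_SIGNAL is a hypothetical name; SIGUSR1 is POSIX-only, SIGTERM exists on both platforms):

```python
import signal

# Prefer SIGUSR1 where the platform defines it (Linux, macOS);
# on Windows, getattr falls back to SIGTERM instead of raising AttributeError.
RELOAD_SIGNAL = getattr(signal, "SIGUSR1", signal.SIGTERM)
```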
