"cpu" or cpu not supported as device in settings_eval.ini #19

Open

dscarmo opened this issue Nov 25, 2019 · 2 comments

dscarmo commented Nov 25, 2019

When I set device to "cpu" in settings_eval.ini, just to test CPU performance, I get:

```
Traceback (most recent call last):
  File "run.py", line 187, in <module>
    evaluate_bulk(settings_eval['EVAL_BULK'])
  File "run.py", line 136, in evaluate_bulk
    mc_samples)
  File "/home/diedre/git/quickNAT_pytorch/utils/evaluator.py", line 260, in evaluate
    model.cuda(device)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 208, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 230, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 311, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: Invalid device, must be cuda device
```

This seems like an easy fix: change `model.cuda(device)` to `model.to(device)`, with `device` redefined as something like:

`torch.device("cpu") if device == "cpu" else torch.device("cuda:{}".format(device))`
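A minimal sketch of what that could look like in utils/evaluator.py, assuming `device` arrives from settings_eval.ini either as the string "cpu" or as a GPU index; the helper name `resolve_device` is hypothetical, not from the repository:

```python
import torch

def resolve_device(device):
    # Hypothetical helper: map the value from settings_eval.ini onto a torch.device.
    # "cpu" selects the CPU; anything else is treated as a CUDA device index.
    if device == "cpu":
        return torch.device("cpu")
    return torch.device("cuda:{}".format(device))

# Then, in evaluate(), instead of model.cuda(device):
#     device = resolve_device(device)
#     model.to(device)
```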

I also tested cpu (without quotes), and got:

```
Traceback (most recent call last):
  File "run.py", line 186, in <module>
    settings_eval = Settings('settings_eval.ini')
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 10, in __init__
    self.settings_dict = _parse_values(config)
  File "/home/diedre/git/quickNAT_pytorch/settings.py", line 27, in _parse_values
    config_parsed[section][key] = ast.literal_eval(value)
  File "/usr/lib/python3.6/ast.py", line 85, in literal_eval
    return _convert(node_or_string)
  File "/usr/lib/python3.6/ast.py", line 84, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Name object at 0x7f13134c55c0>
```

The settings parser probably expects an int or a quoted string, since ast.literal_eval cannot parse a bare name like cpu.
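
One way to make the parser tolerant of such values would be to fall back to the raw string when `ast.literal_eval` fails; this is only a sketch of the idea, assuming `_parse_values` currently runs `ast.literal_eval` on every ini value, and the single-value helper `_parse_value` is hypothetical:

```python
import ast

def _parse_value(value):
    # Interpret the ini value as a Python literal (int, float, list, quoted string, ...).
    # Bare names such as  cpu  are not literals, so fall back to the raw string.
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return value
```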


AndreiRoibu commented Nov 27, 2019

I am getting a similar error when trying to run the eval_bulk command. The error is:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

Following community suggestions, I modified line 33 in run.py to the line below, but I am still getting the same error.

`quicknat_model = torch.load(train_params['pre_trained_path'], map_location=torch.device('cpu'))`


joshicola commented Dec 13, 2019

Altering utils/evaluator.py gives the desired functionality (starting line 256):
```python
cuda_available = torch.cuda.is_available()

if cuda_available:
    # GPU available: load normally and move the model to the requested CUDA device.
    model = torch.load(coronal_model_path)
    torch.cuda.empty_cache()
    model.cuda(device)
else:
    # CPU-only machine: remap the saved CUDA tensors onto the CPU while loading.
    model = torch.load(coronal_model_path, map_location=torch.device('cpu'))
```
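
A variant that also folds in the `model.to(device)` suggestion from above, so the same code path works on both CPU and GPU; the names `select_device` and `load_model` are hypothetical and only illustrate the idea:

```python
import torch

def select_device(device):
    # "cpu" (or no CUDA available) -> CPU, otherwise the requested CUDA index.
    if device == "cpu" or not torch.cuda.is_available():
        return torch.device("cpu")
    return torch.device("cuda:{}".format(device))

def load_model(coronal_model_path, device):
    target = select_device(device)
    # map_location places the saved tensors directly on the target device,
    # so loading also works on CPU-only machines.
    model = torch.load(coronal_model_path, map_location=target)
    if target.type == "cuda":
        torch.cuda.empty_cache()
    return model.to(target)
```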
