I'm trying to run some code that imports apex on my laptop (without a GPU) for debugging purposes, but the import throws an error. Version info and stacktrace below:
macOS 10.13.6 (High Sierra)
python 3.7.3
torch 1.1.0.post2
$ python3
Python 3.7.3 (default, Apr 9 2019, 13:13:38)
[Clang 10.0.0 (clang-1000.11.45.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import apex
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/kfsilvers/apex/apex/__init__.py", line 5, in <module>
    from . import amp
  File "/Users/kfsilvers/apex/apex/amp/__init__.py", line 1, in <module>
    from .amp import init, half_function, float_function, promote_function,\
  File "/Users/kfsilvers/apex/apex/amp/amp.py", line 3, in <module>
    from .lists import functional_overrides, torch_overrides, tensor_overrides
  File "/Users/kfsilvers/apex/apex/amp/lists/torch_overrides.py", line 77, in <module>
    if utils.get_cuda_version() >= (9, 1, 0):
  File "/Users/kfsilvers/apex/apex/amp/utils.py", line 9, in get_cuda_version
    return tuple(int(x) for x in torch.version.cuda.split('.'))
AttributeError: 'NoneType' object has no attribute 'split'
Seems like the case where torch.version.cuda returns None isn't handled properly in two places:
apex/amp/utils.py
def get_cuda_version():
    return tuple(int(x) for x in torch.version.cuda.split('.'))
Seems like we could change this to:
def get_cuda_version():
    if torch.version.cuda is not None:
        return tuple(int(x) for x in torch.version.cuda.split('.'))
    else:
        return None
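As a quick sanity check of that parsing logic (simulating torch.version.cuda with a plain parameter, since a CPU-only machine is exactly the case where torch isn't helpful here), the patched behavior looks like this:

```python
def parse_cuda_version(version_str):
    # Same parsing as the proposed get_cuda_version(), but taking the
    # version string as a parameter so it can be exercised without torch.
    if version_str is not None:
        return tuple(int(x) for x in version_str.split('.'))
    else:
        return None

print(parse_cuda_version("10.1"))  # (10, 1) on a CUDA build
print(parse_cuda_version(None))    # None on a CPU-only build
```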
apex/amp/lists/torch_overrides.py
if utils.get_cuda_version() >= (9, 1, 0):
    FP16_FUNCS.extend(_bmms)
else:
    FP32_FUNCS.extend(_bmms)
We'll also need to add a check for None here:
cuda_version = utils.get_cuda_version()
if cuda_version is not None:
    if cuda_version >= (9, 1, 0):
        FP16_FUNCS.extend(_bmms)
    else:
        FP32_FUNCS.extend(_bmms)
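A self-contained sketch of that guard (with `_bmms` as a hypothetical stand-in list, not apex's actual contents) shows the three dispatch outcomes:

```python
FP16_FUNCS = []
FP32_FUNCS = []
_bmms = ["bmm", "baddbmm"]  # hypothetical stand-in for apex's _bmms list

def register_bmms(cuda_version):
    # Mirrors the proposed guard in torch_overrides.py: only dispatch
    # the bmm functions when a CUDA version is actually known.
    if cuda_version is not None:
        if cuda_version >= (9, 1, 0):
            FP16_FUNCS.extend(_bmms)
        else:
            FP32_FUNCS.extend(_bmms)

register_bmms(None)        # CPU-only: neither list is touched
register_bmms((9, 0, 0))   # old CUDA: bmms go to FP32_FUNCS
register_bmms((10, 1))     # new CUDA: bmms go to FP16_FUNCS
```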
Here's where I confess I know nothing about apex. Should FP32_FUNCS.extend(_bmms) still be called in the case where cuda_version is None, i.e. when no GPU is available, or not?
Thanks for reporting this! amp is not usable without a GPU.
The reason for emitting only a warning rather than an error when no GPU is found during setup is that this can sometimes be useful for cross-compilation.
In any case, I agree the error message should be clearer when you try to run amp without a GPU.
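One possible shape for that clearer message (hypothetical helper and wording, not apex's actual code) would be to warn explicitly instead of crashing on the None version:

```python
import warnings

def check_cuda_available(cuda_version):
    # Hypothetical helper: warn, rather than raise AttributeError,
    # when amp is imported on a CPU-only PyTorch build.
    if cuda_version is None:
        warnings.warn(
            "apex.amp requires a CUDA-enabled build of PyTorch; "
            "no CUDA version was detected, so amp will not be usable."
        )
        return False
    return True
```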