
"--no-cuda" does not work #4

Closed
KIVix opened this issue Mar 21, 2021 · 9 comments


KIVix commented Mar 21, 2021

When using the --no-cuda argument, it returns an error.

(env) λ python pix2tex.py --no-cuda
Traceback (most recent call last):
  File "H:\pytlat\ocr\pix2tex.py", line 84, in <module>
    args, model, tokenizer = initialize(args)
  File "H:\pytlat\ocr\pix2tex.py", line 33, in initialize
    model.load_state_dict(torch.load(args.checkpoint))
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 853, in _load
    result = unpickler.load()
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "H:\pytlat\env\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I'm using torch 1.7+cpu; CUDA is not installed, so I can't use it.
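The traceback itself points at the generic fix: a checkpoint saved from a CUDA model carries CUDA storages, and loading it on a CPU-only machine requires `map_location`. A minimal sketch of that pattern (the file name and the dynamic device choice are illustrative, not the actual pix2tex code):

```python
import os
import torch

# Pick CPU when CUDA is unavailable; CUDA storages in the checkpoint are
# then remapped to CPU instead of raising the RuntimeError shown above.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

torch.save(torch.randn(3), 'tmp_ckpt.pt')  # stand-in for a real checkpoint
tensor = torch.load('tmp_ckpt.pt', map_location=device)
print(tensor.device.type)  # 'cpu' on a CPU-only machine
os.remove('tmp_ckpt.pt')
```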

@KIVix KIVix changed the title No-cuda does not work "--no-cuda" does not work Mar 21, 2021
lukas-blecher added a commit that referenced this issue Mar 21, 2021
lukas-blecher (Owner)

Thank you!
I missed an argument. It should work now.
Let me know if there are any other problems.

lukas-blecher (Owner)

Feel free to reopen if the issue is not resolved.


aleksandar-vuckovic commented Jul 19, 2022

I am sorry if this is the wrong place to ask this, but I have a similar issue. When I start latexocr (GUI) or the command line program, I get the following error message:

UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
This happens regardless of whether I start them with --no-cuda or not.

I should add that I installed the package via pip, did no additional training, and am running Arch with Python v3.10.5 .

lukas-blecher (Owner)

This looks more like a broken installation to me. Can you confirm that the PyTorch installation is valid?
This particular issue is resolved, I believe.


aleksandar-vuckovic commented Jul 19, 2022

I think so; at least, running the commands provided at https://pytorch.org/get-started/locally/ worked fine. The GUI application starts while throwing this error message, but it hangs when trying to snip a formula.

lukas-blecher (Owner)

How about

import torch
print(torch.randn(2, device='cuda'))

and does the CLI work?


aleksandar-vuckovic commented Jul 19, 2022

First of all, I want to say that I appreciate you answering this fast and helping me.
The Python commands print

>>> import torch
>>> print(torch.randn(2, device='cuda'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.10/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

which is to be expected, as my GPU is an AMD R9 290, not an Nvidia one, which is why I tried the "--no-cuda" flag.
The CLI output is

 $ pix2tex -f Screenshot.png --no-cuda 
/home/username/.local/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")

Predict LaTeX code for image ("?"/"h" for help). 

It seemingly ignores that I already specified a file. The only input that does not just print the prompt again is the absolute path of the file, at which point it segfaults.

lukas-blecher (Owner)

No problem!
The thing with the file is a bug right now. I discovered it a while back, but I'm currently not really allowed to commit to this repo.
If you don't have an Nvidia GPU in your system, that is handled automatically (--no-cuda isn't needed, but it also doesn't change anything).

From the error message, it looks like torchvision is the problem here.
Try reinstalling that package.

@aleksandar-vuckovic

Oh, you're right; I don't know how I missed that torchvision was the problem.
The issue was that I had the Arch package python-pytorch installed, while torch and torchvision were installed via pip. It seems it tried to use the system-provided torch together with the pip-provided torchvision.
Uninstalling the python-pytorch package and reinstalling torch and torchvision via pip fixed it, and everything seems to work now. Thanks.
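For reference, a sketch of the fix described above (Arch Linux; the exact flags are assumptions, adjust to your setup):

```shell
# Remove the system-wide PyTorch so it can't shadow the pip install
sudo pacman -R python-pytorch

# Reinstall a matching torch/torchvision pair from pip
pip install --user --force-reinstall torch torchvision
```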
