This repository has been archived by the owner on Jun 21, 2023. It is now read-only.

CUDA Does not appear to be working in docker container #50

Open
LargoUsagi opened this issue Jan 18, 2022 · 0 comments

When running the latest Docker container with the NVIDIA container runtime, nvidia-smi runs successfully and shows the graphics card as available and ready.

[screenshot: nvidia-smi output inside the container]

You can run larynx from the command line inside the container without error.

[screenshot: larynx running without the CUDA flag]

But as soon as you pass the --cuda flag:

```
^C(.venv) root@larynx-dd4858485-t9dj2:/home/larynx/app/larynx# python -m larynx --cuda
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/larynx/app/larynx/__main__.py", line 750, in <module>
    main()
  File "/home/larynx/app/larynx/__main__.py", line 66, in main
    import torch
ModuleNotFoundError: No module named 'torch'
```

Similar errors occur if you attempt to start the container with the --cuda flag as an additional argument.
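As a quick way to confirm the failure mode without triggering the full traceback, a small diagnostic (my own sketch, not part of larynx) can report whether torch is importable and, if so, whether CUDA is actually usable:

```python
import importlib.util

def torch_status():
    """Return a short status string describing torch/CUDA availability."""
    # find_spec avoids the ModuleNotFoundError that `python -m larynx --cuda` hits
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    return "cuda available" if torch.cuda.is_available() else "cuda unavailable"

print(torch_status())
```

In the published image this prints "torch not installed", which matches the traceback above.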

By exec-ing into the container and using the existing venv, I was able to install torch and then run the command.

[screenshot: larynx running with --cuda after manually installing torch]

I believe the build container has an issue here: https://github.com/rhasspy/larynx/blob/master/Dockerfile#L42. My knowledge of Python is limited, but it appears the intent is to use a precompiled version of torch that you are providing; however, it does not appear to actually be making it into the container.
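For illustration, a multi-stage Dockerfile that stages a precompiled torch wheel but never installs it into the runtime venv would reproduce this symptom. This is a hypothetical sketch of the failure mode, not the actual rhasspy/larynx Dockerfile; the stage names, wheel path, and venv path are all assumptions:

```dockerfile
# build stage: produce or download the precompiled torch wheel
FROM python:3.9 AS build
# ... fetch torch wheel into /wheels ...

# runtime stage
FROM python:3.9
COPY --from=build /wheels /wheels
# If the install step below is missing (or points at the wrong path),
# the wheel is present in the image but torch is never installed,
# producing exactly the ModuleNotFoundError above:
RUN /home/larynx/app/.venv/bin/pip install --no-index --find-links /wheels torch
```

Manually running the equivalent `pip install` inside the container's venv, as described above, is what made `--cuda` work for me.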
