Torch not compiled with CUDA enabled #33

Open
ahernandezmiro opened this issue Jun 7, 2024 · 6 comments

@ahernandezmiro

ahernandezmiro commented Jun 7, 2024

I am trying to test the tool, but after running the installation process it does not work when I click Generate.
In the console an exception is raised during CUDA initialization (Torch not compiled with CUDA enabled).
My card is a 3080 with 10 GB.
Below I attach the full log after running the Gradio app.

(tooncrafter) C:\Users\user\dev\ToonCrafter>python gradio_app.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.0+cu118 with CUDA 1108 (you have 2.0.0+cpu)
    Python  3.8.10 (you have 3.8.5)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
AE working on z of shape (1, 4, 32, 32) = 4096 dimensions.
checkpoints/tooncrafter_512_interp_v1/model.ckpt
>>> model checkpoint loaded.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Global seed set to 123
start: talking man 2024-06-07 23:46:38
Traceback (most recent call last):
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\gradio\queueing.py", line 532, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\gradio\route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\gradio\blocks.py", line 1923, in process_api
    result = await self.call_function(
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\gradio\blocks.py", line 1509, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\gradio\utils.py", line 832, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\user\dev\ToonCrafter\scripts\gradio\i2v_test_application.py", line 51, in get_image
    model = model.cuda()
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\lightning_fabric\utilities\device_dtype_mixin.py", line 69, in cuda
    device = torch.device("cuda", torch.cuda.current_device())
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\user\miniconda3\envs\tooncrafter\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
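
(As a quick check: the `2.0.0+cpu` in the xFormers warning above already shows that the installed torch is a CPU-only build. Running the following from the active tooncrafter environment confirms it:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

If this prints a version ending in `+cpu`, or prints `False`, torch needs to be reinstalled from a CUDA wheel index, as the replies below describe.)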
@HyperUpscale

I am trying the same thing... but something doesn't add up for me.

This will give you torch with CUDA:

pip install torch==2.0.0+cu117 torchvision==0.15.0+cu117 torchaudio==2.0.0+cu117 -f https://download.pytorch.org/whl/cu117/torch_stable.html

pip install transformers==4.25.1
AND
pip install -U xformers --index-url https://download.pytorch.org/whl/cu20

That completes the setup, but every time I run Generate my PC freezes and I need to hard reboot.

I suspect two causes, but can't confirm yet: Nvidia driver 555.85, and potentially Windows itself (because Triton, another GPU optimizer, doesn't support Windows).

I will try on WSL.
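
(Aside: if the versions end up mismatched again, xformers ships a diagnostic module that prints its own build info next to the detected torch and CUDA versions, which makes conflicts like the one in the log above easier to spot. Assuming a reasonably recent xformers build, it can be run with:

python -m xformers.info

The output should also list which attention kernels are actually available.)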

@HyperUpscale

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && bash Miniconda3-latest-Linux-x86_64.sh

eval "$(/home/YOUR-USERNAME/miniconda3/bin/conda shell.bash hook)" && conda init && source ~/.bashrc

conda create -n tooncrafter python=3.8.5 && conda activate tooncrafter

sudo apt update && sudo apt install nvidia-cuda-toolkit

git clone https://github.com/ToonCrafter/ToonCrafter.git && cd ToonCrafter

pip install -r requirements.txt

python gradio_app.py
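
(Before launching the app, it may be worth confirming that the GPU is actually visible inside WSL, assuming a Windows driver recent enough to expose it to WSL2:

nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"

If the second command prints False, requirements.txt most likely pulled a CPU-only torch wheel, and torch needs to be reinstalled from a CUDA index as in the earlier comment.)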

@HyperUpscale

Also... keep in mind, I just saw that an A100 is required with the default model.

#28

@zafirexd

zafirexd commented Jun 8, 2024

I don't know why, but xformers doesn't work for me, so I removed it. But I see the ToonCrafter decoder uses xformers, and that's why I prefer to use the ComfyUI implementation:
https://github.com/kijai/ComfyUI-DynamiCrafterWrapper

#28

@PladsElsker

PladsElsker commented Jun 14, 2024

You need to install the GPU build of PyTorch. This is a common issue with PyTorch, and reinstalling the GPU build instead of the CPU build will likely resolve this specific error.

Reading from https://pytorch.org/, you are on Windows, so you can probably run (not tested):

python -m pip install torch==2.0.0 torchvision --index-url https://download.pytorch.org/whl/cu118

You might need to pin the torchvision version as well; I don't remember which one was compatible with torch==2.0.0 on Python 3.8.

That being said, I think you'll have an easier time running the pipeline on Linux (or WSL).
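
(On the torchvision pin mentioned above: torchvision 0.15.1 is the release that accompanied torch 2.0.0, so the full command would probably look like the following, still untested:

python -m pip install torch==2.0.0 torchvision==0.15.1 --index-url https://download.pytorch.org/whl/cu118

Exact wheel availability for Python 3.8 should still be double-checked against that index.)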

@vivyhasadream

I encountered the same bug while reinstalling xformers, which consistently conflicted with torch, torchaudio, and torchvision. To resolve this, I used the following command:

pip install xformers==0.0.22.post2 --index-url https://download.pytorch.org/whl/cu118

I also reinstalled torch, torchaudio, and torchvision, or they may have been pulled in during the xformers installation; I can't recall precisely. For reference, here is the final combination that worked for me (it still throws errors while generating videos, but at least it runs):

  • torch: 2.1.0+cu118
  • torchaudio: 2.0.2+cu118
  • torchvision: 0.15.2+cu118
  • xformers: 0.0.22.post2+cu118
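
(For comparison, the installed combination can be printed with a one-liner, assuming all four packages import cleanly:

python -c "import torch, torchvision, torchaudio, xformers; print(torch.__version__, torchvision.__version__, torchaudio.__version__, xformers.__version__)"

The output can then be matched against the list above.)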
