
getting the following cuda error #1

Closed
gsgoldma opened this issue Dec 23, 2022 · 10 comments
@gsgoldma

Traceback:

File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "D:\stable-karlo\app.py", line 105, in <module>
main()
File "D:\stable-karlo\app.py", line 69, in main
images = generate(
File "D:\stable-karlo\model\generate.py", line 68, in generate
pipe = make_pipe()
File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 625, in wrapped_func
return get_or_create_cached_value()
File "D:\stable-karlo\.env\lib\site-packages\streamlit\runtime\legacy_caching\caching.py", line 609, in get_or_create_cached_value
return_value = non_optional_func(*args, **kwargs)
File "D:\stable-karlo\model\generate.py", line 41, in make_pipe
return pipe.to("cuda")
File "D:\stable-karlo\.env\lib\site-packages\diffusers\pipeline_utils.py", line 270, in to
module.to(torch_device)
File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 989, in to
return self._apply(convert)
File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 641, in _apply
module._apply(fn)
File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 664, in _apply
param_applied = fn(param)
File "D:\stable-karlo\.env\lib\site-packages\torch\nn\modules\module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "D:\stable-karlo\.env\lib\site-packages\torch\cuda\__init__.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")

@kpthedev
Owner

It seems like your PyTorch installation doesn't have CUDA enabled. Check the PyTorch website for how to install a CUDA-enabled build for your system.

On Windows, after the source step, try running this:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
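To confirm whether the reinstall actually produced a CUDA-enabled build, a quick check can help before launching the app (a minimal diagnostic sketch; it uses only the standard torch API and assumes nothing about stable-karlo itself — `cuda_status` is a hypothetical helper name):

```python
# Diagnostic: report whether this Python environment has a CUDA-enabled
# PyTorch build. Uses only the public torch API; safe to run anywhere.
def cuda_status():
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if torch.version.cuda is None:
        return "CPU-only torch build -- reinstall from the cu117 index"
    if not torch.cuda.is_available():
        return f"torch built for CUDA {torch.version.cuda}, but no usable GPU/driver found"
    return f"CUDA {torch.version.cuda} OK: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```

If this reports a CPU-only build even after the reinstall, the pip command most likely ran in a different environment than the one the app uses.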

@gsgoldma
Author

I tried that but still receive the error. ChatGPT recommended I install the CUDA toolkit, but that had no effect. I also get this at the beginning:
text_encoder\model.safetensors not found
Fetching 26 files: 100%|##########| 26/26 [00:00<00:00, 1663.92it/s]

@andybak

andybak commented Dec 24, 2022

Same here. Fix doesn't work either.

@kpthedev
Owner

Are you sure you have an Nvidia GPU that is compatible with CUDA? Perhaps you don't have the CUDA toolkit installed.

Here is a link to download the CUDA toolkit: https://developer.nvidia.com/cuda-downloads

After you've set up CUDA, go into the stable-karlo folder, activate the environment, and try this:

  1. pip install -r requirements.txt
  2. pip install --upgrade --force-reinstall torch --extra-index-url https://download.pytorch.org/whl/cu117

That should force a reinstall of the CUDA-enabled PyTorch build.

@andybak

andybak commented Dec 24, 2022 via email

@kpthedev
Owner

@andybak Are you also running it on Windows? Do you think you could attach the error message you're getting?

@gsgoldma
Author

I actually got the error resolved. I am on Windows. I screwed up by making my own env instead of using the .env to install the packages. Unfortunately, I then got an OOM error, so it seems I don't have enough VRAM. I guess I'll wait until it gets optimized more.

@andybak

andybak commented Dec 29, 2022 via email

@gsgoldma
Author

gsgoldma commented Dec 29, 2022

The bash command wasn't being recognized on my PC, even though I was running it in Git Bash. I was probably doing something wrong. The commands are below.

git clone https://github.com/kpthedev/stable-karlo.git
cd stable-karlo
python -m venv .env
source .env/bin/activate <---- changed this for conda
pip install -r requirements.txt

So I just used conda/the command line, and ChatGPT told me to use .env/scripts/activate.bat to activate the .env environment and install the requirements; then it worked. I may have had to force-reinstall torch as in the earlier comment, but I don't remember if that was actually necessary.
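For reference, the activation script lives in a different place per OS, which is exactly what tripped things up here: POSIX venvs put it under bin/, Windows venvs under Scripts\. A small sketch that prints the right command for the current platform (the `.env` name matches the repo's instructions; `activate_hint` is a hypothetical helper):

```python
# Print the venv activation command for the current platform.
# POSIX venvs put the script under bin/, Windows venvs under Scripts\.
import os

def activate_hint(venv=".env"):
    if os.name == "nt":  # Windows cmd.exe; PowerShell uses Activate.ps1 instead
        return venv + r"\Scripts\activate.bat"
    return "source " + venv + "/bin/activate"

print(activate_hint())
```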

@kpthedev
Owner

The bash command wasn't being recognized on my PC, even though I was running it in Git Bash.

Yeah, I got a chance to test on Windows and I had to use the activate.bat with the torch reinstall as you said.

As for the OOM errors, you can try the cpu-offloading branch. I was able to generate Karlo images with 8GB of VRAM, but the upscaling requires way more.
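(Editor's note on the offloading approach: diffusers ships an `enable_sequential_cpu_offload()` helper that keeps weights in RAM and streams submodules to the GPU only while they execute, cutting peak VRAM at the cost of speed. Whether the cpu-offloading branch uses exactly this is an assumption; the `enable_offload` wrapper below is hypothetical, not part of the repo.)

```python
# Enable diffusers' sequential CPU offload on a pipeline, if the installed
# diffusers version supports it. Trades generation speed for lower VRAM.
def enable_offload(pipe):
    if hasattr(pipe, "enable_sequential_cpu_offload"):
        pipe.enable_sequential_cpu_offload()  # diffusers API; requires accelerate
        return True
    return False  # older diffusers: fall back to full-GPU execution
```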
