Description
Please help if you can. I am trying to install and use Stable Diffusion on Windows.
Here is the cmd output:
Creating venv in directory venv using python "C:\Users_____\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\stable-diffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 4af3ca5393151d61363c30eef4965e694eeac15e
Installing torch and torchvision
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into repositories\stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories\taming-transformers...
Cloning K-diffusion into repositories\k-diffusion...
Cloning CodeFormer into repositories\CodeFormer...
Cloning BLIP into repositories\BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py:123: UserWarning:
Found GPU0 GeForce GTX 760 which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability supported by this library is 3.7.
warnings.warn(old_gpu_warn % (d, name, major, minor, min_arch // 10, min_arch % 10))
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading: 100%|██████████████████████████████████████████████████████████████████| 939k/939k [00:00<00:00, 1.33MB/s]
Downloading: 100%|██████████████████████████████████████████████████████████████████| 512k/512k [00:00<00:00, 1.09MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 403kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 935kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 4.41k/4.41k [00:00<00:00, 4.66MB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 1.59G/1.59G [08:59<00:00, 3.17MB/s]
Loading weights [81761151] from C:\stable-diffusion\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Traceback (most recent call last):
File "C:\stable-diffusion\stable-diffusion-webui\launch.py", line 295, in <module>
start()
File "C:\stable-diffusion\stable-diffusion-webui\launch.py", line 290, in start
webui.webui()
File "C:\stable-diffusion\stable-diffusion-webui\webui.py", line 133, in webui
initialize()
File "C:\stable-diffusion\stable-diffusion-webui\webui.py", line 63, in initialize
modules.sd_models.load_model()
File "C:\stable-diffusion\stable-diffusion-webui\modules\sd_models.py", line 318, in load_model
sd_model.to(shared.device)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
return super().to(*args, **kwargs)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 4 more times]
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "C:\stable-diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 22.00 MiB (GPU 0; 2.00 GiB total capacity; 1.44 GiB already allocated; 19.26 MiB free; 1.50 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Press any key to continue . . .
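For context on what I have tried: the error message itself suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF, and the web UI documents low-memory launch flags. Below is a sketch of the kind of edit to webui-user.bat that acts on that hint — the exact values are assumptions, not something from my log, and the flag names are taken from the web UI's documented command-line options:

```bat
rem webui-user.bat — hypothetical low-VRAM configuration for a 2 GiB GPU.
rem PYTORCH_CUDA_ALLOC_CONF is the allocator setting named in the OOM error;
rem --lowvram is the web UI's documented low-memory mode, and
rem --precision full --no-half avoid fp16, which older cards handle poorly.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:64
set COMMANDLINE_ARGS=--lowvram --precision full --no-half

call webui.bat
```

That said, the earlier warning ("minimum cuda capability supported by this library is 3.7") suggests the installed PyTorch build may not run on the GTX 760 (capability 3.0) regardless of memory settings, so CPU-only operation (the documented `--use-cpu all` flag) or a different GPU may be the only real options.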