
[Bug]: SD XL model loading on AMD - Failed to create model quickly; will retry using slow method #11835

Open
1 task done
cerias opened this issue Jul 17, 2023 · 17 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

cerias commented Jul 17, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Trying to load the sd_xl_base_0.9.safetensors model ends with:

Creating model from config: /dockerx/repositories/generative-models/configs/inference/sd_xl_base.yaml
Failed to create model quickly; will retry using slow method.

The file is present in the folder and should be readable.

This is running inside a Docker container on a Linux machine with an AMD card. Maybe I missed a step.

Steps to reproduce the problem

Load sd_xl_base_0.9.safetensors via UI.

What should have happened?

Model should be loaded

Version or Commit where the problem happens

14cf434

What Python version are you running on ?

Python 3.9.x (below, not recommended)

What platforms do you use to access the UI ?

Linux

What device are you running WebUI on?

AMD GPUs (RX 6000 above)

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

TORCH_COMMAND='pip install  torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 --index-url https://download.pytorch.org/whl/rocm5.2' python launch.py --precision full --no-half --lyco-dir /dockerx/models/lycoris/

List of extensions

PBRemTools https://github.com/mattyamonaca/PBRemTools.git main d0fee2a8 Thu Jun 29 14:08:23 2023 latest
Stable-Diffusion-Webui-Civitai-Helper https://github.com/butaixianran/Stable-Diffusion-Webui-Civitai-Helper.git main 920ca326 Tue May 23 11:53:22 2023 latest
a1111-sd-webui-lycoris https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris.git main 8e97bf54 Sun Jul 9 07:44:58 2023 latest
a1111-sd-webui-tagcomplete https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git main 737b697 Sat Jul 8 16:03:44 2023 latest
model_preset_manager https://github.com/rifeWithKaiju/model_preset_manager.git main 4e25eebd Tue May 30 15:07:37 2023 latest
sd-webui-controlnet https://github.com/Mikubill/sd-webui-controlnet.git main 07bed6c Sat Jul 8 20:54:01 2023 latest
stable-diffusion-webui-eyemask https://github.com/ilian6806/stable-diffusion-webui-eyemask.git master 7b803a43 Fri Jun 2 11:15:19 2023 latest


Console logs

> Creating model from config: /dockerx/repositories/generative-models/configs/inference/sd_xl_base.yaml
> Failed to create model quickly; will retry using slow method.

Additional information

It's running in the following Docker container image:

rocm/pytorch:rocm5.5_ubuntu20.04_py3.8_pytorch_1.13.1

@cerias cerias added the bug-report Report of a bug, yet to be confirmed label Jul 17, 2023

KEDI103 commented Jul 17, 2023

Well, for gfx906 I get this error:

Loading weights [13406f993c] from /media/bcansin/ai/ai/stable-diffusion-webui/models/Stable-diffusion/sd_xl_refiner_0.9/sd_xl_refiner_0.9.safetensors
Creating model from config: /media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_refiner.yaml
Failed to create model quickly; will retry using slow method.
changing setting sd_model_checkpoint to sd_xl_refiner_0.9/sd_xl_refiner_0.9.safetensors [13406f993c]: AssertionError
Traceback (most recent call last):
  File "/media/bcansin/ai/ai/stable-diffusion-webui/modules/shared.py", line 631, in set
    self.data_labels[key].onchange()
  File "/media/bcansin/ai/ai/stable-diffusion-webui/modules/call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/webui.py", line 238, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/modules/sd_models.py", line 578, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/modules/sd_models.py", line 504, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 65, in __init__
    self._init_first_stage(first_stage_config)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 106, in _init_first_stage
    model = instantiate_from_config(config).eval()
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 295, in __init__
    self.encoder = Encoder(**ddconfig)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 553, in __init__
    self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
  File "/media/bcansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 286, in make_attn
    assert XFORMERS_IS_AVAILABLE, (
AssertionError: We do not support vanilla attention in 1.13.1+rocm5.2 anymore, as it is too expensive. Please install xformers via e.g. 'pip install xformers==0.0.16'


KEDI103 commented Jul 17, 2023

Also, if I try the newest dev or stable PyTorch build, this happens: #10873. I guess I really made a big mistake buying an AMD Radeon VII.
Here are the official pytorch/rocm GitHub issues too, no reply yet... so if you have an AMD card, you can't.
pytorch/pytorch#103973
ROCm/ROCm#2314

@gmonsoon


Try loading the fp16 model version, and use the fp16 VAE too. I think the "Failed to create model quickly; will retry using slow method." error is related to not having enough VRAM.


Cathy0908 commented Jul 18, 2023

I solved it by running "pip install open-clip-torch==2.20.0".
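If you want to see what your venv currently has before forcing that reinstall, here is a small stdlib-only sketch (the helper name installed_version is made up for illustration):

```python
# Check which version of a package is installed before force-reinstalling it.
# Uses only the standard library; works inside the webui's venv.
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

if installed_version("open-clip-torch") != "2.20.0":
    print("open-clip-torch != 2.20.0; try: pip install open-clip-torch==2.20.0")
```

Run it with the venv's Python interpreter, since that is the environment the webui actually imports from.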

@SansQuartier

I'm having the same issue with an Nvidia RTX 2070 Max-Q, so it's not an AMD-only issue.


KEDI103 commented Jul 19, 2023

I'm having the same issue with a Nvidia RTX 2070 Max-Q so it's not an AMD issue

From my testing it looks very much like a PyTorch version issue. My card doesn't support 2.0.1, but if I run with it, it can open the model. However, since ROCm is buggy for gfx906 and drops support in the next release, I guess I'll never be able to generate on PyTorch 2 or newer.


Alexandre-Fernandez commented Jul 27, 2023

I'm having the same issue under Ubuntu 22.04 with an Nvidia RTX 3090 when trying to load SDXL.
After getting the "Failed to create model quickly; will retry using slow method" message, it fills up all my RAM.
I left it for 10 minutes or so at 100% RAM but nothing happened.

EDIT: I deleted the stable-diffusion-webui directory, cloned a new one, started webui.sh, let it reinstall everything, and now it works. I'm guessing it was an outdated dependency or something like that (I was already on the latest AUTOMATIC1111/stable-diffusion-webui version when I got the error, by the way).

@alinabel

I'm getting the same problem. It's also downloading a 10 GB file and my internet connection struggles with that in cmd. I need to download the file manually, but where do I put it? Any ideas?

Loading weights [31e35c80fc] from D:\stable-diffusion-webui\models\Stable-diffusion\sd_xl_base_1.0.safetensors
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Failed to create model quickly; will retry using slow method.
Downloading (…)ip_pytorch_model.bin: 6%|██▍ | 619M/10.2G


Luk928 commented Jul 28, 2023

I fixed this issue in my local virtual environment by running the webui-user script with the --reinstall-torch command-line arg, then running it again with the --xformers command-line arg. This forces the launcher to reinstall or update these modules, and now I'm able to use both normal SDXL and DreamShaper XL.


afaan13 commented Jul 28, 2023

I fixed this issue in my local virtual environment by running the webui-user script with --reinstall-torch commandline arg, then I ran it again with --xformers commandline arg. This forces the webui.py script to reinstall or update these modules and now I'm able to use both normal SDXL and DreamShaper

How much VRAM do you have?

Should I run it with a command like this?

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --reinstall-torch

call webui.bat
 


afaan13 commented Jul 28, 2023

I have a 6 GB GTX 1660 SUPER GPU and 8 GB of RAM, and SD works fine for me except for this new model-loading problem:


  File "F:\Stable Diffusion Final\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 137, in __init__
    self.weight = Parameter(torch.empty(
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.


Stable diffusion model failed to load

I have tried all the methods to get rid of this error but I can't. And let me tell you: I have one old model which is 10 GB and I can load that, but I can't load this 6 GB sd_xl model, LOL.
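For scale, it's worth converting the failed allocation in that traceback to something readable. The request itself is tiny, which suggests system RAM was already exhausted by the slow loading path before this point, rather than this one tensor being too big:

```python
# Size of the allocation that DefaultCPUAllocator rejected, per the traceback above.
nbytes = 58982400
mib = nbytes / 2**20      # bytes -> mebibytes
print(f"{mib:.2f} MiB")   # prints "56.25 MiB"
```

A machine with 8 GB of RAM can normally serve a 56 MiB request easily; the failure means nearly all memory was already committed while instantiating the full SDXL model on the CPU.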


Luk928 commented Jul 28, 2023

How Much VRAM you Have?

Should i Run with this command like this:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers --reinstall-torch

call webui.bat
 

I have 8GB RTX 3060.

Also, when I tried running with COMMANDLINE_ARGS=--xformers --reinstall-torch it didn't work. I had to run COMMANDLINE_ARGS=--reinstall-torch first, and then run again with COMMANDLINE_ARGS=--xformers. I have no idea why it only worked that way.

You could also try the --lowram & --lowvram or --medvram command-line args, but I'm not sure if the XL models support them.
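A minimal sketch of that two-step sequence on Linux (the launcher path and variable usage are assumptions based on the standard webui.sh setup; run each step separately, never both flags at once):

```shell
# Step 1: force a clean torch reinstall, then let the webui finish starting.
COMMANDLINE_ARGS="--reinstall-torch" ./webui.sh

# Step 2: on the next launch, drop --reinstall-torch and enable xformers instead.
COMMANDLINE_ARGS="--xformers" ./webui.sh
```

Keeping --reinstall-torch set permanently would redownload torch on every launch, which is likely why combining the two flags in one run behaves badly.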


KEDI103 commented Jul 31, 2023

Okay, I managed to generate with SDXL 1.0, DreamShaper XL, etc.
with InvokeAI v3.0.1rc3 and
pip install torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/rocm5.2


[2023-07-31 04:23:44,153]::[InvokeAI]::INFO --> Converting /media/b_cansin/ai/ai/stable-diffusion-webui/models/Stable-diffusion/dreamshaperXL10_alpha2Xl10/dreamshaperXL10_alpha2Xl10.safetensors to diffusers format
[2023-07-31 04:31:29,052]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:tokenizer
[2023-07-31 04:31:31,240]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:text_encoder
[2023-07-31 04:31:44,742]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:tokenizer_2
[2023-07-31 04:31:45,297]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:text_encoder_2
[2023-07-31 04:33:54,833]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:scheduler
[2023-07-31 04:33:55,596]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:unet
 93%|████████████████████████████████████████████████████████████████████████████████████▉      | 140/150 [04:51<00:20,  2.08s/it]
[2023-07-31 04:40:00,087]::[InvokeAI]::INFO --> Loading model /media/b_cansin/ai/ai/InvokeAI-Installer/models/.cache/31f8e9c5a9debddf84f4c2a4bb8758aa, type sdxl:main:vae

That version can still generate, but not AUTOMATIC1111:
version: [1.5.1](https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/7ba3923d5b494b7756d0b12f33acb3716d830b9a)  •  python: 3.10.6  •  torch: 1.13.1+rocm5.2  •  xformers: N/A  •  gradio: 3.32.0  •  checkpoint: [879db523c3](https://google.com/search?q=879db523c30d3b9017143d56705015e15a2cb5628762c11d086fed9538abd7fd)

Calculating sha256 for /media/b_cansin/ai/ai/stable-diffusion-webui/models/Stable-diffusion/animeArtDiffusionXL_alpha3/animeArtDiffusionXL_alpha3.safetensors: 53bb4fdc63b36014201f2789eab73f3b2b3569a2a9a57b3efb28d4f17283e4c4
Loading weights [53bb4fdc63] from /media/b_cansin/ai/ai/stable-diffusion-webui/models/Stable-diffusion/animeArtDiffusionXL_alpha3/animeArtDiffusionXL_alpha3.safetensors
Creating model from config: /media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
creating model quickly: AssertionError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/ui_settings.py", line 272, in <lambda>
    fn=lambda value, k=k: self.run_settings_single(value, key=k),
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/ui_settings.py", line 90, in run_settings_single
    if not opts.set(key, value):
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/shared.py", line 633, in set
    self.data_labels[key].onchange()
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/call_queue.py", line 14, in f
    res = func(*args, **kwargs)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/webui.py", line 238, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()), call=False)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/sd_models.py", line 582, in reload_model_weights
    load_model(checkpoint_info, already_loaded_state_dict=state_dict)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/modules/sd_models.py", line 498, in load_model
    sd_model = instantiate_from_config(sd_config.model)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 65, in __init__
    self._init_first_stage(first_stage_config)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/diffusion.py", line 106, in _init_first_stage
    model = instantiate_from_config(config).eval()
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/models/autoencoder.py", line 295, in __init__
    self.encoder = Encoder(**ddconfig)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 553, in __init__
    self.mid.attn_1 = make_attn(block_in, attn_type=attn_type)
  File "/media/b_cansin/ai/ai/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/model.py", line 286, in make_attn
    assert XFORMERS_IS_AVAILABLE, (
AssertionError: We do not support vanilla attention in 1.13.1+rocm5.2 anymore, as it is too expensive. Please install xformers via e.g. 'pip install xformers==0.0.16'

![Screenshot from 2023-07-31 05-20-18](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/52115509/9d86ad68-9186-4361-be0f-f9097ebc4544)

That's getting interesting. Before this, I was also fighting a "Segmentation fault (core dumped)" that even crashed Python, and it only happened on AUTOMATIC1111 while other AI tools kept working.

@tangcan1600

Good, I solved it, thanks.

@grigio

grigio commented Aug 18, 2023

@tangcan1600 how did you solve this?

I still have this issue: sd-webui works with SD 1.5 but not with SDXL.

stablediff-rocm-runner | Loading weights [31e35c80fc] from /stablediff-web/models/Stable-diffusion/Stable-diffusion/sdxl/sd_xl_base_1.0.safetensors
stablediff-rocm-runner | Creating model from config: /stablediff-web/repositories/generative-models/configs/inference/sd_xl_base.yaml
stablediff-rocm-runner | Failed to create model quickly; will retry using slow method.

@splex7

splex7 commented Aug 29, 2023

photo_2023-08-30_00-10-03
Memory usage peaked as soon as the SDXL model was loaded.

I was using an RTX 3060 with 12 GB of VRAM. The journey with SD 1.5 had been pleasant for the past few months, but I ran into a problem with SDXL not loading properly in Automatic1111 version 1.5. The characteristic symptom was severe system-wide stuttering that I had never experienced before. I had 8 GB x 2 of dual-channel system memory, so I upgraded to 16 GB x 2. Immediately after the upgrade, I loaded the SDXL base model through the same process and confirmed that it loaded successfully. I don't think this is the only solution, but it is one physically possible solution.

@aaroncastle

Solution:

  1. Create a directory named "openai" under your "stable-diffusion" path (whatever your install directory is named).
  2. Use git to clone this repository into the "openai" directory: https://www.modelscope.cn/AI-ModelScope/clip-vit-large-patch14.git
  3. Once done, restart webui.sh (or webui.bat on Windows) under stable-diffusion.
  4. This issue is not related to Nvidia or AMD graphics cards.
  5. You'll notice that your pages using the nginx reverse proxy are also extremely smooth!
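The steps above can be sketched as shell commands (the install path is a placeholder; the clone is a mirror of the openai/clip-vit-large-patch14 CLIP weights that the webui otherwise tries to download at model-load time):

```shell
cd /path/to/stable-diffusion-webui   # adjust to your install directory
mkdir -p openai
git clone https://www.modelscope.cn/AI-ModelScope/clip-vit-large-patch14.git \
    openai/clip-vit-large-patch14
# Afterwards, restart webui.sh (or webui.bat on Windows).
```

Having the weights locally avoids the download stall during the "slow method" retry for users with unreliable connections to the upstream host.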
