
[Bug]: fastapi 0.91 app.add_middleware #7714

Closed
1 task done
JustFrederik opened this issue Feb 10, 2023 · 67 comments · Fixed by #7717
Labels
bug (Report of a confirmed bug) · upstream (Issue or feature that must be resolved upstream)

Comments

@JustFrederik

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I tried to run the project. It failed to attach the middleware in 'webui.py' line 232, because the server was already running. I downgraded fastapi to 0.90 and it works fine.

Steps to reproduce the problem

  1. clone project
  2. run webui.sh

What should have happened?

The web UI should have started.

Commit where the problem happens

ea9bd9f

What platforms do you use to access the UI ?

Linux

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

No

List of extensions

No

Console logs

File "/home/user/code/sdw/webui.py", line 232, in webui
app.add_middleware(GZipMiddleware, minimum_size=1000)
File "/home/user/code/sdw/venv/lib/python3.10/site-packages/starletee/application.py", line 135, in add_middleware raise RuntimeError("Cannot add middleware after an application has started")

Additional information

The fastapi version causes the problem.
No changes to the project.
Using Arch Linux.

@JustFrederik added the bug-report (Report of a bug, yet to be confirmed) label on Feb 10, 2023
@etherealxx

Can confirm. I just installed the webui twice and the same thing happened both times. Mine is on Windows.

venv "I:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: ea9bd9fc7409109adcd61b897abc2c8881161256
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch==1.13.1+cu117
  Using cached https://download.pytorch.org/whl/cu117/torch-1.13.1%2Bcu117-cp310-cp310-win_amd64.whl (2255.4 MB)
Collecting torchvision==0.14.1+cu117
  Using cached https://download.pytorch.org/whl/cu117/torchvision-0.14.1%2Bcu117-cp310-cp310-win_amd64.whl (4.8 MB)
Collecting typing-extensions
  Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Collecting pillow!=8.3.*,>=5.3.0
  Using cached Pillow-9.4.0-cp310-cp310-win_amd64.whl (2.5 MB)
Collecting numpy
  Using cached numpy-1.24.2-cp310-cp310-win_amd64.whl (14.8 MB)
Collecting requests
  Using cached requests-2.28.2-py3-none-any.whl (62 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
Collecting charset-normalizer<4,>=2
  Using cached charset_normalizer-3.0.1-cp310-cp310-win_amd64.whl (96 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Installing collected packages: charset-normalizer, urllib3, typing-extensions, pillow, numpy, idna, certifi, torch, requests, torchvision
Successfully installed certifi-2022.12.7 charset-normalizer-3.0.1 idna-3.4 numpy-1.24.2 pillow-9.4.0 requests-2.28.2 torch-1.13.1+cu117 torchvision-0.14.1+cu117 typing-extensions-4.4.0 urllib3-1.26.14

[notice] A new release of pip available: 22.2.1 -> 23.0
[notice] To update, run: I:\stable-diffusion-webui\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into repositories\stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories\taming-transformers...
Cloning K-diffusion into repositories\k-diffusion...
Cloning CodeFormer into repositories\CodeFormer...
Cloning BLIP into repositories\BLIP...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Moving kl-f8-anime2.ckpt from I:\stable-diffusion-webui\models to I:\stable-diffusion-webui\models\Stable-diffusion.
Calculating sha256 for I:\stable-diffusion-webui\models\Stable-diffusion\kl-f8-anime2.ckpt: df3c506e51b7ee1d7b5a6a2bb7142d47d488743c96aa778afb0f53a2cdc2d38d
Loading weights [df3c506e51] from I:\stable-diffusion-webui\models\Stable-diffusion\kl-f8-anime2.ckpt
Creating model from config: I:\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 27.0s (calculate hash: 2.1s, load weights from disk: 0.3s, create model: 12.9s, apply half(): 1.4s, move model to device: 1.3s, load textual inversion embeddings: 8.9s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "I:\stable-diffusion-webui\launch.py", line 361, in <module>
    start()
  File "I:\stable-diffusion-webui\launch.py", line 356, in start
    webui.webui()
  File "I:\stable-diffusion-webui\webui.py", line 232, in webui
    app.add_middleware(GZipMiddleware, minimum_size=1000)
  File "I:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 135, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Press any key to continue . . .

@GingerSkippy

I am having the same issue as the above

Traceback (most recent call last):
  File "D:\stable-diffusion-webui-master\launch.py", line 361, in <module>
    start()
  File "D:\stable-diffusion-webui-master\launch.py", line 356, in start
    webui.webui()
  File "D:\stable-diffusion-webui-master\webui.py", line 232, in webui
    app.add_middleware(GZipMiddleware, minimum_size=1000)
  File "D:\stable-diffusion-webui-master\venv\lib\site-packages\starlette\applications.py", line 135, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Press any key to continue . . .

@Malegiraldo22

Happens as well on the colab version

Loading weights [eb172d270d] from /content/drive/MyDrive/AI/models/HassanBlend.ckpt
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 39.2s (load weights from disk: 27.6s, create model: 3.5s, apply weights to model: 5.3s, apply half(): 1.6s, load VAE: 0.3s, move model to device: 0.8s).
Running on local URL: http://127.0.0.1:7860/
Running on public URL: https://d0865c11-9b56-488f.gradio.live/

This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/launch.py", line 361, in <module>
    start()
  File "/content/stable-diffusion-webui/launch.py", line 356, in start
    webui.webui()
  File "/content/stable-diffusion-webui/webui.py", line 232, in webui
    app.add_middleware(GZipMiddleware, minimum_size=1000)
  File "/usr/local/envs/automatic/lib/python3.10/site-packages/starlette/applications.py", line 135, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Killing tunnel 127.0.0.1:7860 <> https://d0865c11-9b56-488f.gradio.live/

@panta5

panta5 commented Feb 10, 2023

I'm having the same problem. Perhaps the problem is due to fastapi. I have temporarily resolved it by downgrading to 0.90.0.

@NCTUyoung

Downgrading fastapi to version 0.89.1 should solve the issue.

@mews-se

mews-se commented Feb 10, 2023

Is there a "guide" for how to do that on Windows?

@JustFrederik
Author

0.90 is working too, and this is how you downgrade: https://stackoverflow.com/questions/5226311/installing-specific-package-version-with-pip#5226504
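
For reference, a minimal sketch of the downgrade inside the webui's own virtual environment (paths assume a default install; adjust for your setup, and pick whichever pre-0.91 version you prefer):

cd stable-diffusion-webui
source ./venv/bin/activate        # on Windows: .\venv\Scripts\activate
pip install --upgrade fastapi==0.89.1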

@martinjonesbe

pip install fastapi==0.90 or 0.89.1 didn't help for me.

@panta5

panta5 commented Feb 10, 2023

pip install fastapi==0.90 or 0.89.1 didn't help for me.

You can try with the --upgrade flag.

@JustFrederik
Author

My bad, version 0.90 doesn't exist. It would be 0.90.1.

@rodrigo-barraza

rodrigo-barraza commented Feb 10, 2023

Broken for me on Windows and Ubuntu.

How to fix:

  1. Navigate to your Stable Diffusion folder, for me it is: ~/artificial-intelligence-tools/stable-diffusion-webui
  2. Activate your Stable Diffusion environment: source ./venv/bin/activate
  3. Downgrade fastapi: pip install fastapi==0.90.1
  4. Exit the Stable Diffusion environment: deactivate
  5. Start Stable Diffusion how you normally do.
  6. Generate away!

@kamilkrzywda

kamilkrzywda commented Feb 10, 2023

It's this change: encode/starlette@51c1de1
They added a check that rejects adding middleware after the application has started.
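
Roughly, the guard that commit adds to starlette/applications.py looks like the sketch below (paraphrased from memory, not a verbatim copy of the upstream source):

from starlette.middleware import Middleware

class Starlette:  # excerpt only; the real class contains much more
    def add_middleware(self, middleware_class, **options):
        # Once the middleware stack has been built (i.e. the app is already running),
        # adding more middleware would have no effect, so Starlette now raises
        # instead of failing silently.
        if self.middleware_stack is not None:
            raise RuntimeError("Cannot add middleware after an application has started")
        self.user_middleware.insert(0, Middleware(middleware_class, **options))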

@konstdea

Broken for me on Windows and Ubuntu.

How to fix:

  1. Navigate to your Stable Diffusion folder, for me it is: ~/artificial-intelligence-tools/stable-diffusion-webui
  2. Activate your Stable Diffusion environment: source ./venv/bin/activate
  3. Downgrade fastapi: pip install fastapi==0.90.1
  4. Exit the Stable Diffusion environment: deactivate
  5. Start Stable Diffusion how you normally do.
  6. Generate away!

It helped me fix it, thanks.

@DrChristophFH

Broken for me on Windows and Ubuntu.

How to fix:

1. Navigate to your Stable Diffusion folder, for me it is: `~/artificial-intelligence-tools/stable-diffusion-webui`

2. Activate your Stable Diffusion environment: `source ./venv/bin/activate`

3. Downgrade fastapi: `pip install fastapi==0.90.1`

4. Exit the Stable Diffusion environment: `deactivate`

5. Start Stable Diffusion how you normally do.

6. Generate away!

What to do when one doesn't even have /bin/ yet if I might ask?

@camenduru
Collaborator

camenduru commented Feb 10, 2023

For colab: !sed -i '$a fastapi==0.90.0' requirements_versions.txt
For PC (Linux, Unix, macOS, Samsung refrigerators, ...): add the line fastapi==0.90.0 to requirements_versions.txt

@mews-se

mews-se commented Feb 10, 2023

For Windows I started a terminal in my stable diffusion folder and ran

.\venv\Scripts\python.exe -m pip install --upgrade fastapi==0.90.1

That solved my problem

@etherealxx

etherealxx commented Feb 10, 2023

soo is it 0.90.1 or 0.90.0?

@rodrigo-barraza

rodrigo-barraza commented Feb 10, 2023

What to do when one doesn't even have /bin/ yet if I might ask?

Start your environment first to get everything where it needs to be: bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) or if you're more familiar with this, run the webui.sh or webui-user.sh files. Then proceed with the instructions above.

@rodrigo-barraza

soo is it 0.90.1 or 0.90.0?

Either one should work, I believe, but I had success with 0.90.1.

@Blood4u

Blood4u commented Feb 10, 2023

Downgrading to 0.90.0, 0.90.1, or 0.89.0 did not do the trick for me; I am receiving the same error.

@photonstorm

@mews-se thanks - that worked great for me (Windows 11), v0.90.1 seems to be just fine.

@Malegiraldo22

!sed -i '$a fastapi==0.90.0' requirements_versions.txt for colab add this line fastapi==0.90.0 in requirements_versions.txt for pc linux unix macos samsung refrigerators ...

the colab line didn't work for me
/bin/bash: /usr/local/envs/automatic/lib/libtinfo.so.6: no version information available (required by /bin/bash)
sed: can't read requirements_versions.txt: No such file or directory

I guess it happens because all the environment configuration ends up inside a .tar.zst file, but I don't know. I'm a total noob at this.
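
If the sed line fails with "No such file or directory", one likely cause is that the notebook's working directory is not the webui checkout. A hedged sketch for a standard Colab layout (the /content path is an assumption about your notebook):

%cd /content/stable-diffusion-webui
!sed -i '$a fastapi==0.90.0' requirements_versions.txt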

@rar0n

rar0n commented Feb 10, 2023

I had a similar issue. I've no idea if this is useful for anyone, just putting this out here:

I have 2 users on Linux Mint 20.3. User 1 installed it fine, no problem. But for User 2 the install script fails right after starting the server and quits with the error message:

Traceback (most recent call last):
  File "launch.py", line 361, in <module>
    start()
  File "launch.py", line 356, in start
    webui.webui()
  File "/home/USER2/stable-diffusion-webui/webui.py", line 232, in webui
    app.add_middleware(GZipMiddleware, minimum_size=1000)
  File "/home/USER2/stable-diffusion-webui/venv/lib/python3.8/site-packages/starlette/applications.py", line 135, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started

Both users installed it correctly AFAIK (prerequisites etc). Simply deleting the stable-diffusion-webui folder and re-installing resulted in the same error.
One thing is that User 2 had Anaconda installed prior (but deleted), for another version of stable diffusion (also deleted) (I think it was from this how-to: https://www.howtogeek.com/830179/how-to-run-stable-diffusion-on-your-pc-to-generate-ai-images/ )

I resorted to simply copying the install folder from User 1, overwriting User 2's install, and taking ownership of the files.
That worked!

Apparently not everything got installed for User 2; I've no idea why.


@gihepa988

I am running in paperspace, what should I do?
If anyone has the same environment, please advise.

@GoAwayNow

@gihepa988 requirements_versions.txt doesn't seem reliable. Replacing in requirements.txt works for me instead.

!sed -i 's/fastapi/fastapi==0.90.1/' requirements.txt

@ClashSAN ClashSAN pinned this issue Feb 12, 2023
@dausdanger

dausdanger commented Feb 12, 2023

pip install fastapi==0.90.1

For Windows I started a terminal in my stable diffusion folder and ran
.\venv\Scripts\python.exe -m pip install --upgrade fastapi==0.90.1
That solved my problem

Why did I get a syntax error though? Any help?

I'm getting the same error, does anyone have a solution?

I solved it: just go to requirements_versions.txt and paste this: "fastapi==0.90.1"

@vt-idiot
Contributor

vt-idiot commented Feb 12, 2023

I just add a cell after running the requirements cell right before launching the webui and paste the pip command assuming it would overwrite all wrong versions of fastapi.

@jameslanman I'm using a notebook I wrote myself from bits and pieces of other notebooks, I don't have a "requirements cell" - just git cloning this repo, copying models/VAE files from various places, and copying my "saved" config.json and ui-config.json back into the Colab. I see a bunch of other packages in your error that I don't recognize - onnx, etc.

I just added:

!pip install --upgrade fastapi==0.90.1
!pip install gradio==3.16.2

And then some code from TheLastBen's colab notebook to copy blocks.py and activate local tunnel if I choose to use it and do some edits relating to that. Then launch.py runs.

You need to ask whoever wrote the notebook.

@tommyjohn81

tommyjohn81 commented Feb 12, 2023

Unfortunately the Dreambooth extension seems to be broken with fastapi==0.90.1, is anybody else experiencing this?

Log:
Error executing callback app_started_callback for /content/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/api.py
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/modules/script_callbacks.py", line 88, in app_started_callback
    c.callback(demo, app)
  File "/content/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/api.py", line 141, in dreambooth_api
    async def cancel_jobs(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 657, in decorator
    self.add_api_route(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 596, in add_api_route
    route = route_class(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/routing.py", line 405, in __init__
    self.response_field = create_response_field(
  File "/opt/conda/lib/python3.10/site-packages/fastapi/utils.py", line 90, in create_response_field
    raise fastapi.exceptions.FastAPIError(
fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that typing.Union[extensions.sd_dreambooth_extension.dreambooth.db_shared.DreamState, starlette.responses.JSONResponse] is a valid Pydantic field type. If you are using a return type annotation that is not a valid Pydantic field (e.g. Union[Response, dict, None]) you can disable generating the response model from the type annotation with the path operation decorator parameter response_model=None. Read more: https://fastapi.tiangolo.com/tutorial/response-model/

@Shazz303

I am having the same issue as the above

Traceback (most recent call last):
  File "D:\stable-diffusion-webui-master\launch.py", line 361, in <module>
    start()
  File "D:\stable-diffusion-webui-master\launch.py", line 356, in start
    webui.webui()
  File "D:\stable-diffusion-webui-master\webui.py", line 232, in webui
    app.add_middleware(GZipMiddleware, minimum_size=1000)
  File "D:\stable-diffusion-webui-master\venv\lib\site-packages\starlette\applications.py", line 135, in add_middleware
    raise RuntimeError("Cannot add middleware after an application has started")
RuntimeError: Cannot add middleware after an application has started
Press any key to continue . . .

Go to your stable diffusion folder and you should see a file named webui.py; edit it with Notepad. After opening it, press Ctrl+F (find) and search for app.add_middleware(GZipMiddleware, minimum_size=1000), cut that line, paste it above def webui(): with a blank line between, and save the file. This should fix the error :)
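
For context, here is a minimal standalone sketch (not the webui's actual code) of the ordering that newer Starlette enforces: middleware has to be registered before the app starts serving requests.

from fastapi import FastAPI
from starlette.middleware.gzip import GZipMiddleware

app = FastAPI()

# Fine: the app has not started serving yet, so middleware can still be registered.
app.add_middleware(GZipMiddleware, minimum_size=1000)

# If the server (uvicorn, or gradio's built-in server in the webui's case) is already
# running this app, a later add_middleware() call raises:
# RuntimeError: Cannot add middleware after an application has started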

@Lacan82

Lacan82 commented Feb 13, 2023

Downgrading fastapi does the trick.

pip install fastapi==0.89.1

@INFCreator

2. ./venv/bin/activate

How do I activate the SD environment? (I am a newbie regarding programming terminology.) I don't know what to do once I open the SD folder.

@ClashSAN
Collaborator

@INFCreator for windows: #7749 (reply in thread)

@jameslanman

I just add a cell after running the requirements cell right before launching the webui and paste the pip command assuming it would overwrite all wrong versions of fastapi.

@jameslanman I'm using a notebook I wrote myself from bits and pieces of other notebooks, I don't have a "requirements cell" - just git cloning this repo, copying models/VAE files from various places, and copying my "saved" config.json and ui-config.json back into the Colab. I see a bunch of other packages in your error that I don't recognize - onnx, etc.

I just added:

!pip install --upgrade fastapi==0.90.1
!pip install gradio==3.16.2

And then some code from TheLastBen's colab notebook to copy blocks.py and activate local tunnel if I choose to use it and do some edits relating to that. Then launch.py runs.

You need to ask whoever wrote the notebook.

@vt-idiot Thank you! I appreciate the extra bit of advice. Time to get my hands dirty.

@vt-idiot
Contributor

@jameslanman I don't think you need it anymore after the most recent commit?

@Agakishiev

!sed -i '$a fastapi==0.90.0' requirements_versions.txt for colab add this line fastapi==0.90.0 in requirements_versions.txt for pc linux unix macos samsung refrigerators ...

I am not a programmer. I don't understand what I must do.

@Agakishiev

!sed -i '$a fastapi==0.90.0' requirements_versions.txt for colab add this line fastapi==0.90.0 in requirements_versions.txt for pc linux unix macos samsung refrigerators ...

I am not a programmer. I don't understand what I must do.

in COLAB

@XksA-me

XksA-me commented Feb 14, 2023

Thanks, installing this version makes my code run normally:

pip install fastapi==0.90.0

@sundayhk

System: WIN 10
Python: 3.10.6

❌ Wrong operation (Install dependencies directly):

python -m pip install --upgrade fastapi==0.90.1

✅ Correct operation:
Go into your SD folder, right-click to open a terminal, paste this in, then hit Enter.

cd stable-diffusion-webui
.\venv\Scripts\python.exe -m pip install --upgrade fastapi==0.90.1

https://www.reddit.com/r/StableDiffusion/comments/10yurxl/comment/j803rbq/

@stankawiky

Downgrading fastapi does the trick.

pip install fastapi==0.89.1

It doesn't.

nextfullstorm added a commit to Decentramind-io/stable_diffusion_docker that referenced this issue Feb 28, 2023
@master117

Downgrading to 0.90.0, 0.90.1, or 0.89.0 did not do the trick for me; I am receiving the same error.

Make sure you do it while in the proper environment

@Jonseed

Jonseed commented Mar 17, 2023

Sometimes when I start the webui it upgrades fastapi to 0.94.0, which doesn't work, and fails to start the server. Why does the requirements_versions.txt file still install fastapi 0.94.0 if it is nonfunctional?

@Boldor83

Boldor83 commented Apr 1, 2023

I got this error after switching back to master.
I tried:
.\venv\Scripts\python.exe -m pip install --upgrade fastapi==0.90.1
adding
fastapi==0.90.0
to the requirements,
and editing webui.py as Shazz303 commented.

None of this worked. I have now checked out a9eab23 and it works again, but I would like to upgrade at some point...

@dischordo

If you are doing the pip installs and it's not working, you might have
"set REQS_FILE=.\extensions\sd_dreambooth_extension\requirements.txt"
in your webui-user. Dreambooth asks for ~=0.94, so change it to ==0.90.1 and then do the fix.
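
If you go that route, a hedged sketch of the edit inside the Dreambooth extension's requirements file (the surrounding contents are hypothetical; only the fastapi pin matters):

# extensions\sd_dreambooth_extension\requirements.txt (hypothetical excerpt)
# before: fastapi~=0.94
# after:
fastapi==0.90.1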

@Boldor83

Boldor83 commented Apr 9, 2023

I get this problem even without any extensions.
Edit: OK, I got it working now; part of it might be due to the Norton virus scanner. I then did what rodrigo-barraza wrote, and now it works.

@SomeAB

SomeAB commented Apr 10, 2023

I had everything working fine; I recently upgraded some modules and it seems SD just auto-upgraded fastapi. Since I had seen the related error before, I knew what was wrong.

Here are my steps on Windows 11, Python 3.10

  • Use PowerShell instead of Command Prompt on Windows
  • cd 'X:\LocationofSD-webui\' and hit Enter (this will change your current directory to where you have installed SD)
  • .\venv\Scripts\python.exe -m pip install --upgrade fastapi==0.90.1
  • It will give you an error saying rembg needs fastapi to be 0.92.0 and numpy and scikit to be such-and-such versions. Ignore it, since even fastapi 0.92.0 doesn't work.

@rightova37

I'm trying this on Windows but I'm getting the same error over and over, can you help me resolve this issue?

venv "D:\stable-diffusion-webui-master\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: <none>
Installing requirements for Web UI
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [49b15f81] from D:\stable-diffusion-webui-master\models\Stable-diffusion\v2-1_512-ema-pruned.safetensors
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-master\launch.py", line 295, in <module>
    start()
  File "D:\stable-diffusion-webui-master\launch.py", line 290, in start
    webui.webui()
  File "D:\stable-diffusion-webui-master\webui.py", line 133, in webui
    initialize()
  File "D:\stable-diffusion-webui-master\webui.py", line 63, in initialize
    modules.sd_models.load_model()
  File "D:\stable-diffusion-webui-master\modules\sd_models.py", line 313, in load_model
    load_model_weights(sd_model, checkpoint_info)
  File "D:\stable-diffusion-webui-master\modules\sd_models.py", line 197, in load_model_weights
    model.load_state_dict(sd, strict=False)
  File "D:\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
        size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
        size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
        size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
        size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
        size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).

@Durpady

Durpady commented Apr 24, 2023

Go to your stable diffusion folder and you should see a file named webui.py; edit it with Notepad. After opening it, press Ctrl+F (find) and search for app.add_middleware(GZipMiddleware, minimum_size=1000), cut that line, paste it above def webui(): with a blank line between, and save the file. This should fix the error :)

webui-docker-auto-cpu-1  | Traceback (most recent call last):
webui-docker-auto-cpu-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion/../../webui.py", line 202, in <module>
webui-docker-auto-cpu-1  |     webui()
webui-docker-auto-cpu-1  |   File "/stable-diffusion-webui/repositories/stable-diffusion/../../webui.py", line 173, in webui
webui-docker-auto-cpu-1  |     app.add_middleware(GZipMiddleware, minimum_size=1000)
webui-docker-auto-cpu-1  |   File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 139, in add_middleware
webui-docker-auto-cpu-1  |     raise RuntimeError("Cannot add middleware after an application has started")
webui-docker-auto-cpu-1  | RuntimeError: Cannot add middleware after an application has started
webui-docker-auto-cpu-1 exited with code 1

I am running AbdBarho's Docker version, and attempting (as should be clear) to use the CPU version of Automatic's UI. I do not seem to have the webui.py file, as I can't find it (or /stable-diffusion-webui/, for that matter), so now I'm unsure of what to do. At this point the line to use has been made quite clear, but for the life of me I can't seem to find where to put it. As some others have indicated, I'm also a noob here, the vast majority of this I don't really understand.
