auto (AUTOMATIC1111): TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options' #608

Closed
cyril23 opened this issue Nov 4, 2023 · 9 comments
Labels: bug (Something isn't working)


cyril23 commented Nov 4, 2023

Has this issue been opened before?

  • It is not in the FAQ, I checked.
  • It is not in the issues, I searched.

Describe the bug

I've tried to run the auto (AUTOMATIC1111) UI, following the wiki, and get the following error:

:~/stable-diffusion-webui-docker$ docker compose --profile auto up --build
[+] Building 0.9s (32/32) FINISHED                                                                                                            docker:default
 => [auto internal] load build definition from Dockerfile                                                                                               0.0s
 => => transferring dockerfile: 4.13kB                                                                                                                  0.0s
 => [auto internal] load .dockerignore                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                         0.0s
 => [auto internal] load metadata for docker.io/library/python:3.10.9-slim                                                                              0.8s
 => [auto internal] load metadata for docker.io/alpine/git:2.36.2                                                                                       0.8s
 => [auto internal] load metadata for docker.io/library/alpine:3.17                                                                                     0.8s
 => [auto internal] load build context                                                                                                                  0.0s
 => => transferring context: 149B                                                                                                                       0.0s
 => [auto download 1/8] FROM docker.io/alpine/git:2.36.2@sha256:ec491c893597b68c92b88023827faa771772cfd5e106b76c713fa5e1c75dea84                        0.0s
 => [auto xformers 1/3] FROM docker.io/library/alpine:3.17@sha256:f71a5f071694a785e064f05fed657bf8277f1b2113a8ed70c90ad486d6ee54dc                      0.0s
 => [auto stage-2  1/14] FROM docker.io/library/python:3.10.9-slim@sha256:76dd18d90a3d8710e091734bf2c9dd686d68747a51908db1e1f41e9a5ed4e2c5              0.0s
 => CACHED [auto stage-2  2/14] RUN --mount=type=cache,target=/var/cache/apt   apt-get update &&   apt-get install -y fonts-dejavu-core rsync git jq m  0.0s
 => CACHED [auto stage-2  3/14] RUN --mount=type=cache,target=/cache --mount=type=cache,target=/root/.cache/pip   aria2c -x 5 --dir /cache --out torch  0.0s
 => CACHED [auto stage-2  4/14] RUN --mount=type=cache,target=/root/.cache/pip   git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git  0.0s
 => CACHED [auto xformers 2/3] RUN apk add --no-cache aria2                                                                                             0.0s
 => CACHED [auto xformers 3/3] RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.  0.0s
 => CACHED [auto stage-2  5/14] RUN --mount=type=cache,target=/root/.cache/pip    --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0  0.0s
 => CACHED [auto download 2/8] COPY clone.sh /clone.sh                                                                                                  0.0s
 => CACHED [auto download 3/8] RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git cf1d67a6fd5ea1aa600c4  0.0s
 => CACHED [auto download 4/8] RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af   && rm -r  0.0s
 => CACHED [auto download 5/8] RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9                     0.0s
 => CACHED [auto download 6/8] RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git ab527a9a6d347f364e3d185ba6d714e22d80cb3c        0.0s
 => CACHED [auto download 7/8] RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2cf03aaf6e704197fd0dae7c7f96aa59  0.0s
 => CACHED [auto download 8/8] RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794a7f565  0.0s
 => CACHED [auto stage-2  6/14] COPY --from=download /repositories/ /stable-diffusion-webui/repositories/                                               0.0s
 => CACHED [auto stage-2  7/14] RUN mkdir /stable-diffusion-webui/interrogate && cp /stable-diffusion-webui/repositories/clip-interrogator/clip_interr  0.0s
 => CACHED [auto stage-2  8/14] RUN --mount=type=cache,target=/root/.cache/pip   pip install -r /stable-diffusion-webui/repositories/CodeFormer/requir  0.0s
 => CACHED [auto stage-2  9/14] RUN --mount=type=cache,target=/root/.cache/pip   pip install pyngrok   git+https://github.com/TencentARC/GFPGAN.git@8d  0.0s
 => CACHED [auto stage-2 10/14] RUN apt-get -y install libgoogle-perftools-dev && apt-get clean                                                         0.0s
 => CACHED [auto stage-2 11/14] RUN --mount=type=cache,target=/root/.cache/pip   cd stable-diffusion-webui &&   git fetch &&   git reset --hard 5ef669  0.0s
 => CACHED [auto stage-2 12/14] COPY . /docker                                                                                                          0.0s
 => CACHED [auto stage-2 13/14] RUN   python3 /docker/info.py /stable-diffusion-webui/modules/ui.py &&   mv /stable-diffusion-webui/style.css /stable-  0.0s
 => CACHED [auto stage-2 14/14] WORKDIR /stable-diffusion-webui                                                                                         0.0s
 => [auto] exporting to image                                                                                                                           0.0s
 => => exporting layers                                                                                                                                 0.0s
 => => writing image sha256:69edf3b65bda9c9842dbc8a1a7ec6c066ae531883dbf02352e37f3541c66ec59                                                            0.0s
 => => naming to docker.io/library/sd-auto:67                                                                                                           0.0s
[+] Running 1/0
 ✔ Container webui-docker-auto-1  Created                                                                                                               0.0s
Attaching to webui-docker-auto-1
webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted config_states
webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted embeddings
webui-docker-auto-1  | Mounted config.json
webui-docker-auto-1  | Mounted models
webui-docker-auto-1  | Mounted styles.csv
webui-docker-auto-1  | Mounted ui-config.json
webui-docker-auto-1  | Mounted extensions
webui-docker-auto-1  | Installing extension dependencies (if any)
webui-docker-auto-1  | Traceback (most recent call last):
webui-docker-auto-1  |   File "/stable-diffusion-webui/webui.py", line 13, in <module>
webui-docker-auto-1  |     initialize.imports()
webui-docker-auto-1  |   File "/stable-diffusion-webui/modules/initialize.py", line 21, in imports
webui-docker-auto-1  |     import gradio  # noqa: F401
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/gradio/__init__.py", line 3, in <module>
webui-docker-auto-1  |     import gradio.components as components
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/gradio/components/__init__.py", line 1, in <module>
webui-docker-auto-1  |     from gradio.components.annotated_image import AnnotatedImage
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/gradio/components/annotated_image.py", line 12, in <module>
webui-docker-auto-1  |     from gradio import utils
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 353, in <module>
webui-docker-auto-1  |     class AsyncRequest:
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 372, in AsyncRequest
webui-docker-auto-1  |     client = httpx.AsyncClient()
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1397, in __init__
webui-docker-auto-1  |     self._transport = self._init_transport(
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1445, in _init_transport
webui-docker-auto-1  |     return AsyncHTTPTransport(
webui-docker-auto-1  |   File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 275, in __init__
webui-docker-auto-1  |     self._pool = httpcore.AsyncConnectionPool(
webui-docker-auto-1  | TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
webui-docker-auto-1 exited with code 1
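For what it's worth, my reading of the traceback (an illustration, not the actual library code): the installed httpx passes a socket_options keyword that the installed httpcore's AsyncConnectionPool does not yet accept, i.e. the two packages are out of sync. A minimal stand-in for that pattern:

```python
# Stand-in illustration (not the real httpcore code) of the failure mode:
# a keyword argument that the installed version does not know about
# raises TypeError at call time.

class AsyncConnectionPool:
    """Mimics an httpcore release that predates the socket_options parameter."""
    def __init__(self, http2=False, retries=0):
        self.http2 = http2
        self.retries = retries

def build_pool(**kwargs):
    """Return a pool, or the TypeError message if the keywords don't match."""
    try:
        return AsyncConnectionPool(**kwargs)
    except TypeError as exc:
        return str(exc)

print(build_pool(retries=3))           # old keywords still work
print(build_pool(socket_options=[]))   # the same 'unexpected keyword argument' error
```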

Which UI

auto

Hardware / Software

  • OS: Ubuntu
  • OS version: Ubuntu 20.04.6 LTS
  • WSL version (if applicable):
  • Docker Version: 24.0.6
  • Docker compose version: v2.21.0
  • Repo version: I've tried both master and git checkout tags/8.1.0
  • RAM: 16 GB
  • GPU/VRAM: 23028MiB

Steps to Reproduce

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker/
# instead of master branch I've also tried git checkout tags/8.1.0, same results
docker compose --profile download up --build
docker compose --profile auto up --build

Additional context

$ nvidia-smi
Sat Nov  4 15:14:53 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.12             Driver Version: 535.104.12   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A10G                    On  | 00000000:00:1E.0 Off |                    0 |
|  0%   28C    P8              16W / 300W |      2MiB / 23028MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
$ uname -a
Linux ip-xxx-xxx-xxx-xxx 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:44:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Using a Amazon g5.xlarge instance (NVIDIA A10G Tensor-Core-GPU) with an EC2 Deep Learning Base GPU AMI (Ubuntu 20.04) 20231026 (ami-0d134e01570c1e7b4) image.

By the way, the invoke UI doesn't work for me either, see #595 (comment)

Side note

On a completely different setup, a Google g2-standard-4 machine (Nvidia L4) with a "Deep Learning VM with CUDA 12.1 M112" (c0-deeplearning-common-cu121-v20230925-debian-11-py310) image, I've successfully started AUTOMATIC1111 without Docker, using https://github.com/AUTOMATIC1111/stable-diffusion-webui directly via

export CUDAVERSION=121
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu$CUDAVERSION && \
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui && \
cd stable-diffusion-webui && \
git checkout tags/v1.6.0 && \
cd ~/stable-diffusion-webui/models/Stable-diffusion && \
# wget "https://civitai.com/api/download/models/130072" -O realisticVisionV51_v51VAE.safetensors && \
wget "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" -O "v1-5-pruned-emaonly.safetensors" && \
# wget "https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt" -O "sd-v1-5-inpainting.ckpt" && \
cd ~/stable-diffusion-webui && \
wget "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth" -O GFPGANv1.4.pth && \
cd ~/stable-diffusion-webui/ && \
python3 launch.py --listen

but sometimes on python3 launch.py I see the same error, got an unexpected keyword argument 'socket_options'. Based on related issues in the https://github.com/AUTOMATIC1111/stable-diffusion-webui/ repo, I could fix it as follows:

cd ~/stable-diffusion-webui/ && \
pip install -U pip && \
pip install -U httpcore && \
pip3 install httpx==0.24.1
# and then this works:
python3 launch.py --listen

But in this https://github.com/AbdBarho/stable-diffusion-webui-docker repo I don't know how to apply this.
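(An untested sketch of how the pins above might be applied here, assuming services/AUTOMATIC1111/Dockerfile is where the auto image is built: add them as a late RUN step, after the webui requirements are installed, so the pins take precedence, then rebuild with docker compose --profile auto up --build.)

```dockerfile
# Untested sketch: append near the end of services/AUTOMATIC1111/Dockerfile,
# after the step that installs requirements_versions.txt, so these pins win.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -U pip httpcore && \
    pip install httpx==0.24.1
```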

cyril23 added the bug label on Nov 4, 2023

Firestorm7893 commented Nov 4, 2023

I fixed it temporarily by editing the Dockerfile present in services/AUTOMATIC1111/

FROM alpine/git:2.36.2 as download

COPY clone.sh /clone.sh

RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
  && rm -rf data assets **/*.ipynb

RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git 47b6b607fdd31875c9279cd2f4f16b92e4ea958e \
  && rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif

RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af \
  && rm -rf assets inputs

RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git c9fe758757e022f05ca5a53fa8fac28889e4f1cf
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9dbbea318df83e8
RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794a7f5657c19f


FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/6.0.0/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64-pytorch201.whl'


FROM python:3.10.13-slim

ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1

RUN --mount=type=cache,target=/var/cache/apt \
  apt-get update && \
  # we need those
  apt-get install -y fonts-dejavu-core rsync git jq moreutils aria2 \
  # extensions needs those
  ffmpeg libglfw3-dev libgles2-mesa-dev pkg-config libcairo2 libcairo2-dev build-essential


RUN --mount=type=cache,target=/cache --mount=type=cache,target=/root/.cache/pip \
  aria2c -x 5 --dir /cache --out torch-2.0.1-cp310-cp310-linux_x86_64.whl -c \
  https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp310-cp310-linux_x86_64.whl && \
  pip install /cache/torch-2.0.1-cp310-cp310-linux_x86_64.whl torchvision --index-url https://download.pytorch.org/whl/cu118


COPY requirements_versions.txt /requirements_versions.txt

RUN --mount=type=cache,target=/root/.cache/pip \
  git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \
  cd stable-diffusion-webui && \
  git reset --hard 20ae71faa8ef035c31aa3a410b707d792c8203a3 && \
  pip install -r /requirements_versions.txt

RUN --mount=type=cache,target=/root/.cache/pip  \
  --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl \
  pip install /xformers-0.0.21.dev544-cp310-cp310-manylinux2014_x86_64.whl

ENV ROOT=/stable-diffusion-webui


COPY --from=download /repositories/ ${ROOT}/repositories/
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
  pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt

RUN --mount=type=cache,target=/root/.cache/pip \
  pip install pyngrok \
  git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
  git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
  git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b

# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step

# TODO: either remove if fixed in A1111 (unlikely) or move to the top with other apt stuff
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so

ARG SHA=68f336bd994bed5442ad95bad6b6ad5564a5409a
RUN --mount=type=cache,target=/root/.cache/pip \
  cd stable-diffusion-webui && \
  git fetch && \
  git reset --hard ${SHA} && \
  pip install -r requirements_versions.txt && \
  pip install -U pip && \
  pip install -U httpcore && \
  pip install httpx==0.24.1
COPY . /docker

RUN \
  python3 /docker/info.py ${ROOT}/modules/ui.py && \
  mv ${ROOT}/style.css ${ROOT}/user.css && \
  # one of the ugliest hacks I ever wrote \
  sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py && \
  git config --global --add safe.directory '*'

WORKDIR ${ROOT}
ENV NVIDIA_VISIBLE_DEVICES=all
ENV CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u webui.py --listen --port 7860 ${CLI_ARGS}

and by adding a file called requirements_versions.txt in that same folder. Here's the content of the file:

GitPython==3.1.32
Pillow==9.5.0
accelerate==0.21.0
basicsr==1.4.2
blendmodes==2022
clean-fid==0.1.35
einops==0.4.1
fastapi==0.94.0
gfpgan==1.3.8
gradio==3.41.2
httpx==0.24.1
httpcore==0.15
inflection==0.5.1
jsonmerge==1.8.0
kornia==0.6.7
lark==1.1.2
numpy==1.23.5
omegaconf==2.2.3
open-clip-torch==2.20.0
piexif==1.1.3
psutil==5.9.5
pytorch_lightning==1.9.4
realesrgan==0.3.0
resize-right==0.0.2
safetensors==0.3.1
scikit-image==0.21.0
timm==0.9.2
tomesd==0.1.3
torch
torchdiffeq==0.2.3
torchsde==0.2.5
transformers==4.30.2

This applies the fix you mentioned in your post :)


cyril23 commented Nov 4, 2023

Thanks for the quick reply! Edit: with your custom Dockerfile and the requirements_versions.txt you provided, I was able to open the GUI, but I now get an error on any txt2img or img2img request with v1-5-pruned-emaonly.ckpt (prompt: happy guy, the rest are stock settings):

webui-docker-auto-1  | Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt: cc6cb27103417325ff94f52b7a5d2dde45a7515b25c255d8e396c90014281516
webui-docker-auto-1  | Loading weights [cc6cb27103] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
webui-docker-auto-1  | Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
webui-docker-auto-1  | LatentDiffusion: Running in eps-prediction mode
webui-docker-auto-1  | DiffusionWrapper has 859.52 M params.
webui-docker-auto-1  | Applying attention optimization: xformers... done.
webui-docker-auto-1  | Model loaded in 1.7s (create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.3s, calculate empty prompt: 0.1s).
  0% 0/16 [00:00<?, ?it/s]
webui-docker-auto-1  | *** Error completing request
webui-docker-auto-1  | *** Arguments: ('task(ue0649ifuscybol)', 'happy guy', '', [], 16, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x7f67fe8800a0>, 0, 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
webui-docker-auto-1  |     Traceback (most recent call last):
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/call_queue.py", line 58, in f
webui-docker-auto-1  |         res = list(func(*args, **kwargs))
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/call_queue.py", line 37, in f
webui-docker-auto-1  |         res = func(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/txt2img.py", line 62, in txt2img
webui-docker-auto-1  |         processed = processing.process_images(p)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/processing.py", line 677, in process_images
webui-docker-auto-1  |         res = process_images_inner(p)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/processing.py", line 794, in process_images_inner
webui-docker-auto-1  |         samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/processing.py", line 1054, in sample
webui-docker-auto-1  |         samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 464, in sample
webui-docker-auto-1  |         samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 303, in launch_sampling
webui-docker-auto-1  |         return func()
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 464, in <lambda>
webui-docker-auto-1  |         samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
webui-docker-auto-1  |         return func(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
webui-docker-auto-1  |         denoised = model(x, sigmas[i] * s_in, **extra_args)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 189, in forward
webui-docker-auto-1  |         x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(subscript_cond(cond_in, a, b), image_cond_in[a:b]))
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
webui-docker-auto-1  |         eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
webui-docker-auto-1  |         return self.inner_model.apply_model(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
webui-docker-auto-1  |         setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
webui-docker-auto-1  |         return self.__orig_func(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
webui-docker-auto-1  |         x_recon = self.model(x_noisy, t, **cond)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
webui-docker-auto-1  |         result = forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1329, in forward
webui-docker-auto-1  |         out = self.diffusion_model(x, t, context=cc)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
webui-docker-auto-1  |         return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
webui-docker-auto-1  |         h = module(h, emb, context)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 84, in forward
webui-docker-auto-1  |         x = layer(x, context)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 324, in forward
webui-docker-auto-1  |         x = block(x, context=context[i])
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 259, in forward
webui-docker-auto-1  |         return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
webui-docker-auto-1  |         return CheckpointFunction.apply(func, len(inputs), *args)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/autograd/function.py", line 539, in apply
webui-docker-auto-1  |         return super().apply(*args, **kwargs)  # type: ignore[misc]
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/util.py", line 129, in forward
webui-docker-auto-1  |         output_tensors = ctx.run_function(*ctx.input_tensors)
webui-docker-auto-1  |       File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/attention.py", line 262, in _forward
webui-docker-auto-1  |         x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
webui-docker-auto-1  |         return self._call_impl(*args, **kwargs)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
webui-docker-auto-1  |         return forward_call(*args, **kwargs)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 489, in xformers_attention_forward
webui-docker-auto-1  |         out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 192, in memory_efficient_attention
webui-docker-auto-1  |         return _memory_efficient_attention(
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 290, in _memory_efficient_attention
webui-docker-auto-1  |         return _memory_efficient_attention_forward(
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 306, in _memory_efficient_attention_forward
webui-docker-auto-1  |         op = _dispatch_fw(inp)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 94, in _dispatch_fw
webui-docker-auto-1  |         return _run_priority_list(
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 69, in _run_priority_list
webui-docker-auto-1  |         raise NotImplementedError(msg)
webui-docker-auto-1  |     NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
webui-docker-auto-1  |          query       : shape=(1, 4096, 8, 40) (torch.float16)
webui-docker-auto-1  |          key         : shape=(1, 4096, 8, 40) (torch.float16)
webui-docker-auto-1  |          value       : shape=(1, 4096, 8, 40) (torch.float16)
webui-docker-auto-1  |          attn_bias   : <class 'NoneType'>
webui-docker-auto-1  |          p           : 0.0
webui-docker-auto-1  |     `flshattF` is not supported because:
webui-docker-auto-1  |         xFormers wasn't build with CUDA support
webui-docker-auto-1  |         Operator wasn't built - see `python -m xformers.info` for more info
webui-docker-auto-1  |     `tritonflashattF` is not supported because:
webui-docker-auto-1  |         xFormers wasn't build with CUDA support
webui-docker-auto-1  |         requires A100 GPU
webui-docker-auto-1  |         Only work on pre-MLIR triton for now
webui-docker-auto-1  |     `cutlassF` is not supported because:
webui-docker-auto-1  |         xFormers wasn't build with CUDA support
webui-docker-auto-1  |         Operator wasn't built - see `python -m xformers.info` for more info
webui-docker-auto-1  |     `smallkF` is not supported because:
webui-docker-auto-1  |         xFormers wasn't build with CUDA support
webui-docker-auto-1  |         dtype=torch.float16 (supported: {torch.float32})
webui-docker-auto-1  |         max(query.shape[-1] != value.shape[-1]) > 32
webui-docker-auto-1  |         Operator wasn't built - see `python -m xformers.info` for more info
webui-docker-auto-1  |         unsupported embed per head: 40
webui-docker-auto-1  |
webui-docker-auto-1  | ---

During startup I see a warning too:

~/stable-diffusion-webui-docker$ docker compose --profile auto up --build
[+] Building 1.0s (34/34) FINISHED                                                                                                      docker:default
 => [auto internal] load build definition from Dockerfile                                                                                         0.0s
 => => transferring dockerfile: 4.43kB                                                                                                            0.0s
 => [auto internal] load .dockerignore                                                                                                            0.0s
 => => transferring context: 2B                                                                                                                   0.0s
 => [auto internal] load metadata for docker.io/library/alpine:3.17                                                                               0.9s
 => [auto internal] load metadata for docker.io/library/python:3.10.13-slim                                                                       0.9s
 => [auto internal] load metadata for docker.io/alpine/git:2.36.2                                                                                 0.9s
 => [auto download 1/9] FROM docker.io/alpine/git:2.36.2@sha256:ec491c893597b68c92b88023827faa771772cfd5e106b76c713fa5e1c75dea84                  0.0s
 => [auto internal] load build context                                                                                                            0.0s
 => => transferring context: 194B                                                                                                                 0.0s
 => [auto stage-2  1/15] FROM docker.io/library/python:3.10.13-slim@sha256:3c9182c6498d7de6044be04fb1785ba3a04f953d515d45e5007e8be1c15fdd34       0.0s
 => [auto xformers 1/3] FROM docker.io/library/alpine:3.17@sha256:f71a5f071694a785e064f05fed657bf8277f1b2113a8ed70c90ad486d6ee54dc                0.0s
 => CACHED [auto stage-2  2/15] RUN --mount=type=cache,target=/var/cache/apt   apt-get update &&   apt-get install -y fonts-dejavu-core rsync gi  0.0s
 => CACHED [auto stage-2  3/15] RUN --mount=type=cache,target=/cache --mount=type=cache,target=/root/.cache/pip   aria2c -x 5 --dir /cache --out  0.0s
 => CACHED [auto stage-2  4/15] COPY requirements_versions.txt /requirements_versions.txt                                                         0.0s
 => CACHED [auto stage-2  5/15] RUN --mount=type=cache,target=/root/.cache/pip   git clone https://github.com/AUTOMATIC1111/stable-diffusion-web  0.0s
 => CACHED [auto xformers 2/3] RUN apk add --no-cache aria2                                                                                       0.0s
 => CACHED [auto xformers 3/3] RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/downl  0.0s
 => CACHED [auto stage-2  6/15] RUN --mount=type=cache,target=/root/.cache/pip    --mount=type=bind,from=xformers,source=/wheel.whl,target=/xfor  0.0s
 => CACHED [auto download 2/9] COPY clone.sh /clone.sh                                                                                            0.0s
 => CACHED [auto download 3/9] RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd  0.0s
 => CACHED [auto download 4/9] RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git 47b6b607fdd3187  0.0s
 => CACHED [auto download 5/9] RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af   &&  0.0s
 => CACHED [auto download 6/9] RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9               0.0s
 => CACHED [auto download 7/9] RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git c9fe758757e022f05ca5a53fa8fac28889e4f1cf  0.0s
 => CACHED [auto download 8/9] RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9  0.0s
 => CACHED [auto download 9/9] RUN . /clone.sh generative-models https://github.com/Stability-AI/generative-models 45c443b316737a4ab6e40413d7794  0.0s
 => CACHED [auto stage-2  7/15] COPY --from=download /repositories/ /stable-diffusion-webui/repositories/                                         0.0s
 => CACHED [auto stage-2  8/15] RUN mkdir /stable-diffusion-webui/interrogate && cp /stable-diffusion-webui/repositories/clip-interrogator/data/  0.0s
 => CACHED [auto stage-2  9/15] RUN --mount=type=cache,target=/root/.cache/pip   pip install -r /stable-diffusion-webui/repositories/CodeFormer/  0.0s
 => CACHED [auto stage-2 10/15] RUN --mount=type=cache,target=/root/.cache/pip   pip install pyngrok   git+https://github.com/TencentARC/GFPGAN.  0.0s
 => CACHED [auto stage-2 11/15] RUN apt-get -y install libgoogle-perftools-dev && apt-get clean                                                   0.0s
 => CACHED [auto stage-2 12/15] RUN --mount=type=cache,target=/root/.cache/pip   cd stable-diffusion-webui &&   git fetch &&   git reset --hard   0.0s
 => CACHED [auto stage-2 13/15] COPY . /docker                                                                                                    0.0s
 => CACHED [auto stage-2 14/15] RUN   python3 /docker/info.py /stable-diffusion-webui/modules/ui.py &&   mv /stable-diffusion-webui/style.css /s  0.0s
 => CACHED [auto stage-2 15/15] WORKDIR /stable-diffusion-webui                                                                                   0.0s
 => [auto] exporting to image                                                                                                                     0.0s
 => => exporting layers                                                                                                                           0.0s
 => => writing image sha256:466f66db6474ac74079e4ab400e19cf6a7a3ced30c9200047ddde59f42f440f1                                                      0.0s
 => => naming to docker.io/library/sd-auto:67                                                                                                     0.0s
[+] Running 1/0
 ✔ Container webui-docker-auto-1  Created                                                                                                         0.0s
Attaching to webui-docker-auto-1
webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted config_states
webui-docker-auto-1  | Mounted .cache
webui-docker-auto-1  | Mounted embeddings
webui-docker-auto-1  | Mounted config.json
webui-docker-auto-1  | Mounted models
webui-docker-auto-1  | Mounted styles.csv
webui-docker-auto-1  | Mounted ui-config.json
webui-docker-auto-1  | Mounted extensions
webui-docker-auto-1  | Installing extension dependencies (if any)
webui-docker-auto-1  | WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
webui-docker-auto-1  |     PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.1.0+cu121)
webui-docker-auto-1  |     Python  3.10.11 (you have 3.10.13)
webui-docker-auto-1  |   Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
webui-docker-auto-1  |   Memory-efficient attention, SwiGLU, sparse and more won't be available.
webui-docker-auto-1  |   Set XFORMERS_MORE_DETAILS=1 for more details
webui-docker-auto-1  | Loading weights [cc6cb27103] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
webui-docker-auto-1  | Running on local URL:  http://0.0.0.0:7860
webui-docker-auto-1  |
webui-docker-auto-1  | To create a public link, set `share=True` in `launch()`.
webui-docker-auto-1  | Startup time: 13.2s (import torch: 6.3s, import gradio: 1.2s, setup paths: 2.9s, other imports: 1.2s, setup codeformer: 0.1s, load scripts: 0.5s, create ui: 0.7s, gradio launch: 0.1s, add APIs: 0.1s).
webui-docker-auto-1  | Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
webui-docker-auto-1  | LatentDiffusion: Running in eps-prediction mode
webui-docker-auto-1  | DiffusionWrapper has 859.52 M params.
webui-docker-auto-1  | Applying attention optimization: xformers... done.
webui-docker-auto-1  | Model loaded in 11.5s (load weights from disk: 9.6s, create model: 0.5s, apply weights to model: 0.6s, apply half(): 0.2s, calculate empty prompt: 0.5s).

(edit: removed duplicate citations)

There are some issues about this:

Maybe my CUDA 12.1 (or CUDA Version 12.2 according to nvidia-smi) or my PyTorch version is too new?
According to https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers I can try to run SD without xformers, as it only seems to be a performance enhancement.

I've disabled xformers by editing docker-compose.yml and removing --xformers from this line:

- CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
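For reference, the relevant part of the edited docker-compose.yml would look roughly like this (a sketch only; the service name and surrounding keys are assumptions, the flags are taken from the line above):

```yaml
services:
  auto:
    environment:
      # --xformers removed; the UI falls back to a non-xformers attention implementation
      - CLI_ARGS=--allow-code --medvram --enable-insecure-extension-access --api
```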

Now text2img, img2img etc. fully work! Not sure if I should open a new issue for this, since I'm running your customized Dockerfile.

@Herbert-Zheng

Herbert-Zheng commented Nov 7, 2023

I believe this issue has been fixed. We only need to change this line:

git reset --hard 5ef669de080814067961f28357256e8fe27544f4 && \

to the latest AUTOMATIC1111 commit, 4afaaf8a020c1df457bcf7250cb1c7f609699fa7, which adds httpx==0.24.1 to the requirements file: #13839
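The error in the issue title is exactly this kind of keyword mismatch: a newer httpx passes socket_options to httpcore's connection pool, while the older httpcore it ends up paired with does not accept that argument yet, which is why pinning httpx helps. A minimal stand-in (not the real httpx/httpcore code) reproduces the shape of the failure:

```python
# Hypothetical stand-in class, NOT the real httpcore API: it imitates an older
# pool whose __init__ predates the socket_options keyword.
class AsyncConnectionPool:
    def __init__(self, max_connections=10):
        self.max_connections = max_connections


try:
    # A newer caller passes a keyword the old class does not know about.
    AsyncConnectionPool(max_connections=10, socket_options=[])
except TypeError as err:
    # e.g. "AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'"
    print(err)
```

The fix in the pinned commit keeps the two libraries at versions whose keyword sets agree, which makes the TypeError disappear.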

@qyvlik

qyvlik commented Nov 9, 2023

See AUTOMATIC1111/stable-diffusion-webui#13847 (comment)

@coopbri

coopbri commented Nov 10, 2023

Thanks all, confirming that updating services/AUTOMATIC1111/Dockerfile to use that commit (4afaaf8) with the httpx pin worked for me.

@cyril23
Author

cyril23 commented Nov 11, 2023

Thanks! I can confirm it, too. It works like this on a fresh Amazon EC2 instance (edit: still using an Amazon g5.xlarge instance (NVIDIA A10G Tensor Core GPU) with an EC2 Deep Learning Base GPU AMI (Ubuntu 20.04) 20231026 (ami-0d134e01570c1e7b4) image):

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git && \
cd stable-diffusion-webui-docker/ && \
sed -i 's/5ef669de080814067961f28357256e8fe27544f4/4afaaf8a020c1df457bcf7250cb1c7f609699fa7/g' services/AUTOMATIC1111/Dockerfile && \
docker compose --profile download up --build
    # Amazon EC2 only: need to login to docker to avoid error "failed to authorize: failed to fetch anonymous token"
    aws configure
    # enter access token credentials, see AWS iam
    aws ecr get-login-password --region {your region, e.g. eu-central-1} | docker login --username AWS --password-stdin {aws domain of that region, e.g. 763104351884.dkr.ecr.eu-central-1.amazonaws.com}
docker compose --profile auto up --build

I left a comment on the related PR here #609 (comment)

I think I can close this here now

@cyril23 cyril23 closed this as completed Nov 11, 2023
@AbdBarho
Owner

Thank you for solving this, and sorry for coming so late to this.

I did some updates in #610. Would be great if anyone could test the new version and re-open this issue if the problem remains.

@coopbri

coopbri commented Nov 13, 2023

Thanks a lot @AbdBarho, testing now :)

EDIT: confirming the PR build works well for me. Also, good catch on other things like the leftover sygil references 🚀

@horoabb

horoabb commented Nov 16, 2023

I updated to the latest version, but the same error still occurs:

TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
