# Support AMD for auto1111 #362
Changes from all commits: 9f574d9, e6b110c, 40b376f, 2b3a382, 0fedf81, bf1c9e3, 1dc203c
**`docker-compose.yml`**

```diff
@@ -1,65 +1,109 @@
 version: '3.9'

-x-base_service: &base_service
-  ports:
-    - "7860:7860"
-  volumes:
-    - &v1 ./data:/data
-    - &v2 ./output:/output
-  stop_signal: SIGINT
-  deploy:
-    resources:
-      reservations:
-        devices:
-          - driver: nvidia
-            device_ids: ['0']
-            capabilities: [gpu]
+x-base_service:
+  &base_service
+  ports:
+    - "7860:7860"
+  volumes:
+    - &v1 ./data:/data
+    - &v2 ./output:/output
+  stop_signal: SIGINT
+  deploy:
+    resources:
+      reservations:
+        devices:
+          - driver: nvidia
+            device_ids: [ '0' ]
+            capabilities: [ gpu ]
+
+x-base_service_amd:
+  &base_service_amd
+  ports:
+    - "7860:7860"
+  volumes:
+    - &v1 ./data:/data
+    - &v2 ./output:/output
+  stop_signal: SIGINT
+  group_add:
+    - video
+  devices:
+    - "/dev/dri"
+    - "/dev/kfd"
```
Inline review discussion on the `/dev/kfd` device:

> **Comment:** I was looking at getting this running within a Docker container on Windows, but it seems like this does not exist on WSL yet, and it fails when I get to the last stage of running it. "/dev/kfd: the main compute interface shared by all GPUs" (from https://rocmdocs.amd.com/_/downloads/en/latest/pdf/). I might try getting this working on Linux and open another merge request.
>
> **Reply:** So it's unavailable for WSL + Docker to use AMD GPUs yet, right? @mpitropov any updates?
>
> **Reply:** I was unable to get it working for WSL + Docker on AMD. I made my own version of this branch for Linux and AMD for AUTOMATIC1111 by looking at their repository. I don't know a clean way of structuring the code for a merge request, since there are no if statements in Dockerfiles to run AMD- or NVIDIA-specific code.
>
> **Reply:** @mtthw-meyer any thoughts? Can your Docker access the …
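Whether the ROCm device nodes exist is easy to check on the host before starting the container. A minimal probe (my addition, not part of the PR):

```shell
# Probe for the ROCm device nodes the AMD services pass through. On WSL or a
# host without the amdgpu stack, /dev/kfd in particular will be missing.
probe_rocm_devices() {
  for dev in /dev/kfd /dev/dri; do
    if [ -e "$dev" ]; then
      echo "$dev: present"
    else
      echo "$dev: missing"
    fi
  done
}
probe_rocm_devices
```

If either node is reported missing, the `devices:` passthrough in the AMD compose services has nothing to hand to the container.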
```diff
 name: webui-docker

 services:
   download:
     build: ./services/download/
-    profiles: ["download"]
+    profiles: [ "download" ]
     volumes:
       - *v1

-  auto: &automatic
+  auto:
+    &automatic
     <<: *base_service
-    profiles: ["auto"]
+    profiles: [ "auto" ]
     build: ./services/AUTOMATIC1111
     image: sd-auto:51
     environment:
-      - CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api
+      - CLI_ARGS=--allow-code --medvram --enable-insecure-extension-access --api

+  auto-amd:
+    &automatic
+    <<: *base_service_amd
+    profiles: [ "auto-amd" ]
+    build: ./services/AUTOMATIC1111-AMD
+    image: sd-auto:48
+    environment:
+      - CLI_ARGS=--allow-code --medvram --no-half --precision full --enable-insecure-extension-access --api
+
   auto-cpu:
     <<: *automatic
-    profiles: ["auto-cpu"]
+    profiles: [ "auto-cpu" ]
     deploy: {}
     environment:
       - CLI_ARGS=--no-half --precision full --allow-code --enable-insecure-extension-access --api

   invoke:
     <<: *base_service
-    profiles: ["invoke"]
+    profiles: [ "invoke" ]
     build: ./services/invoke/
     image: sd-invoke:26
     environment:
       - PRELOAD=true
       - CLI_ARGS=

+  invoke-amd:
+    <<: *base_service_amd
+    profiles: [ "invoke-amd" ]
+    build: ./services/invoke-AMD/
+    image: sd-invoke:26
+    environment:
+      - PRELOAD=true
+      - CLI_ARGS=
+
-  sygil: &sygil
+  sygil:
+    &sygil
     <<: *base_service
-    profiles: ["sygil"]
+    profiles: [ "sygil" ]
     build: ./services/sygil/
     image: sd-sygil:16
     environment:
       - CLI_ARGS=--optimized-turbo
       - USE_STREAMLIT=0

+  sygil-amd:
+    &sygil
+    <<: *base_service_amd
+    profiles: [ "sygil-amd" ]
+    build: ./services/sygil-AMD/
+    image: sd-sygil:16
+    environment:
+      - CLI_ARGS=--optimized-turbo
+      - USE_STREAMLIT=0

   sygil-sl:
     <<: *sygil
-    profiles: ["sygil-sl"]
+    profiles: [ "sygil-sl" ]
     environment:
       - USE_STREAMLIT=1
```
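With these profiles in place, the AMD variants would be started the same way as the existing services. A usage sketch (my wording, not part of the PR; assumes Docker Engine with the compose v2 plugin and, for the `*-amd` profiles, a ROCm-capable Linux host):

```shell
# Each service is selected via a compose profile; the AMD variants added by
# this PR follow the same pattern as the NVIDIA ones.
run_profile() {
  docker compose --profile "$1" up --build
}

# run_profile download     # one-time model/weights download
# run_profile auto-amd     # AUTOMATIC1111 on ROCm
# run_profile invoke-amd   # InvokeAI on ROCm
# run_profile sygil-amd    # Sygil on ROCm
```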
**`services/AUTOMATIC1111-AMD/Dockerfile`** (new file, +93 lines; path inferred from the compose `build:` entry):

```dockerfile
# syntax=docker/dockerfile:1

FROM alpine/git:2.36.2 as download

SHELL ["/bin/sh", "-ceuxo", "pipefail"]

RUN <<EOF
cat <<'EOE' > /clone.sh
mkdir -p repositories/"$1" && cd repositories/"$1" && git init && git remote add origin "$2" && git fetch origin "$3" --depth=1 && git reset --hard "$3" && rm -rf .git
EOE
EOF

RUN . /clone.sh taming-transformers https://github.com/CompVis/taming-transformers.git 24268930bf1dce879235a7fddd0b2355b84d7ea6 \
  && rm -rf data assets **/*.ipynb

RUN . /clone.sh stable-diffusion-stability-ai https://github.com/Stability-AI/stablediffusion.git 47b6b607fdd31875c9279cd2f4f16b92e4ea958e \
  && rm -rf assets data/**/*.png data/**/*.jpg data/**/*.gif

RUN . /clone.sh CodeFormer https://github.com/sczhou/CodeFormer.git c5b4593074ba6214284d6acd5f1719b6c5d739af \
  && rm -rf assets inputs

RUN . /clone.sh BLIP https://github.com/salesforce/BLIP.git 48211a1594f1321b00f14c9f7a5b4813144b2fb9
RUN . /clone.sh k-diffusion https://github.com/crowsonkb/k-diffusion.git 5b3af030dd83e0297272d861c19477735d0317ec
RUN . /clone.sh clip-interrogator https://github.com/pharmapsychotic/clip-interrogator 2486589f24165c8e3b303f84e9dbbea318df83e8


FROM python:3.10.9-slim

SHELL ["/bin/bash", "-ceuxo", "pipefail"]

ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1

RUN PIP_NO_CACHE_DIR=1 pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2

RUN apt-get update && apt install fonts-dejavu-core rsync git jq moreutils bash -y && apt-get clean

RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git reset --hard d7aec59c4eb02f723b3d55c6f927a42e97acd679
pip install -r requirements_versions.txt
EOF

ENV ROOT=/stable-diffusion-webui

COPY --from=download /git/ ${ROOT}
RUN mkdir ${ROOT}/interrogate && cp ${ROOT}/repositories/clip-interrogator/data/* ${ROOT}/interrogate
RUN --mount=type=cache,target=/root/.cache/pip \
  pip install -r ${ROOT}/repositories/CodeFormer/requirements.txt

RUN --mount=type=cache,target=/root/.cache/pip \
  pip install pyngrok \
  git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379 \
  git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 \
  git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b

# Note: don't update the sha of previous versions because the install will take forever
# instead, update the repo state in a later step

# TODO: either remove if fixed in A1111 (unlikely) or move to the top with other apt stuff
RUN apt-get -y install libgoogle-perftools-dev && apt-get clean
ENV LD_PRELOAD=libtcmalloc.so

ARG SHA=0cc0ee1bcb4c24a8c9715f66cede06601bfc00c8
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
cd stable-diffusion-webui
git fetch
git reset --hard ${SHA}
pip install -r requirements_versions.txt
EOF

RUN --mount=type=cache,target=/root/.cache/pip pip install -U opencv-python-headless

COPY . /docker

RUN <<EOF
python3 /docker/info.py ${ROOT}/modules/ui.py
mv ${ROOT}/style.css ${ROOT}/user.css
# one of the ugliest hacks I ever wrote
sed -i 's/in_app_dir = .*/in_app_dir = True/g' /usr/local/lib/python3.10/site-packages/gradio/routes.py
EOF

WORKDIR ${ROOT}
ENV CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]

# Depending on your actual GPU you may want to comment this out.
# Without this you may get the error "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
ENV HSA_OVERRIDE_GFX_VERSION=10.3.0
CMD python -u webui.py --listen --port 7860 ${CLI_ARGS}
```
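The `/clone.sh` helper written by the first `RUN` heredoc pins each repository to a single commit: shallow-fetch one SHA, hard-reset to it, then delete `.git` so no history ends up in the image layer. The same sequence in isolation, exercised against a throwaway local repository rather than the real remotes (all paths and names below are stand-ins):

```shell
# Build a tiny local "remote" to stand in for e.g. the BLIP repository.
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" config uploadpack.allowAnySHA1InWant true  # permit fetch-by-SHA
git -C "$src" -c user.email=ci@example.invalid -c user.name=ci \
    commit -q --allow-empty -m "pinned state"
sha=$(git -C "$src" rev-parse HEAD)

# The clone.sh recipe: init, add remote, shallow-fetch the pinned SHA,
# reset to it, then drop the history entirely.
dst=$(mktemp -d)
cd "$dst"
git init -q
git remote add origin "$src"
git fetch -q origin "$sha" --depth=1
git reset -q --hard "$sha"
rm -rf .git
```

Fetching with `--depth=1` keeps only the one commit's tree, and removing `.git` afterwards means the Dockerfile's `COPY --from=download` stage carries no repository metadata at all.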
**`config.json`** (new file, +10 lines, in the AMD service's build context):

```json
{
  "outdir_samples": "",
  "outdir_txt2img_samples": "/output/txt2img",
  "outdir_img2img_samples": "/output/img2img",
  "outdir_extras_samples": "/output/extras",
  "outdir_txt2img_grids": "/output/txt2img-grids",
  "outdir_img2img_grids": "/output/img2img-grids",
  "outdir_save": "/output/saved",
  "font": "DejaVuSans.ttf"
}
```
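This file is not used as-is: the entrypoint layers it over the user's copy with `jq '. * input'`, where jq's `*` operator deep-merges two JSON objects and the right-hand side wins. A small sketch of that merge (my addition; file names are made up, and the block degrades gracefully where `jq` is not installed):

```shell
tmp=$(mktemp -d)
# A pretend user config that set a different font...
echo '{"font": "arial.ttf", "outdir_save": "/output/saved"}' > "$tmp/user.json"
# ...and the shipped config, whose values take precedence.
echo '{"font": "DejaVuSans.ttf"}' > "$tmp/shipped.json"

merged=""
if command -v jq >/dev/null 2>&1; then
  # Same shape as the entrypoint: user config first, shipped config second.
  merged=$(jq -c '. * input' "$tmp/user.json" "$tmp/shipped.json")
  echo "$merged"
fi
```

The result keeps the user's extra keys but overwrites any key the shipped config also defines, which is how the image forces its `/output` paths and bundled font.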
**`entrypoint.sh`** (new file, +65 lines, in the AMD service's build context):

```shell
#!/bin/bash

set -Eeuo pipefail

# TODO: move all mkdir -p ?
mkdir -p /data/config/auto/scripts/
# mount scripts individually
find "${ROOT}/scripts/" -maxdepth 1 -type l -delete
cp -vrfTs /data/config/auto/scripts/ "${ROOT}/scripts/"

cp -n /docker/config.json /data/config/auto/config.json
jq '. * input' /data/config/auto/config.json /docker/config.json | sponge /data/config/auto/config.json

if [ ! -f /data/config/auto/ui-config.json ]; then
  echo '{}' >/data/config/auto/ui-config.json
fi

declare -A MOUNTS

MOUNTS["/root/.cache"]="/data/.cache"

# main
MOUNTS["${ROOT}/models/Stable-diffusion"]="/data/StableDiffusion"
MOUNTS["${ROOT}/models/VAE"]="/data/VAE"
MOUNTS["${ROOT}/models/Codeformer"]="/data/Codeformer"
MOUNTS["${ROOT}/models/GFPGAN"]="/data/GFPGAN"
MOUNTS["${ROOT}/models/ESRGAN"]="/data/ESRGAN"
MOUNTS["${ROOT}/models/BSRGAN"]="/data/BSRGAN"
MOUNTS["${ROOT}/models/RealESRGAN"]="/data/RealESRGAN"
MOUNTS["${ROOT}/models/SwinIR"]="/data/SwinIR"
MOUNTS["${ROOT}/models/ScuNET"]="/data/ScuNET"
MOUNTS["${ROOT}/models/LDSR"]="/data/LDSR"
MOUNTS["${ROOT}/models/hypernetworks"]="/data/Hypernetworks"
MOUNTS["${ROOT}/models/torch_deepdanbooru"]="/data/Deepdanbooru"
MOUNTS["${ROOT}/models/BLIP"]="/data/BLIP"
MOUNTS["${ROOT}/models/midas"]="/data/MiDaS"
MOUNTS["${ROOT}/models/Lora"]="/data/Lora"

MOUNTS["${ROOT}/embeddings"]="/data/embeddings"
MOUNTS["${ROOT}/config.json"]="/data/config/auto/config.json"
MOUNTS["${ROOT}/ui-config.json"]="/data/config/auto/ui-config.json"
MOUNTS["${ROOT}/extensions"]="/data/config/auto/extensions"

# extra hacks
MOUNTS["${ROOT}/repositories/CodeFormer/weights/facelib"]="/data/.cache"

for to_path in "${!MOUNTS[@]}"; do
  set -Eeuo pipefail
  from_path="${MOUNTS[${to_path}]}"
  rm -rf "${to_path}"
  if [ ! -f "$from_path" ]; then
    mkdir -vp "$from_path"
  fi
  mkdir -vp "$(dirname "${to_path}")"
  ln -sT "${from_path}" "${to_path}"
  echo Mounted $(basename "${from_path}")
done

if [ -f "/data/config/auto/startup.sh" ]; then
  pushd ${ROOT}
  . /data/config/auto/startup.sh
  popd
fi

exec "$@"
```
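The loop above "mounts" each model directory by deleting the in-repo path and replacing it with a symlink into the `/data` volume. The same technique in isolation, run against temporary directories instead of `${ROOT}` and `/data` (paths here are stand-ins):

```shell
root=$(mktemp -d)   # stands in for ${ROOT}
data=$(mktemp -d)   # stands in for the /data volume

declare -A LINKS
LINKS["$root/models/Stable-diffusion"]="$data/StableDiffusion"
LINKS["$root/embeddings"]="$data/embeddings"

for to_path in "${!LINKS[@]}"; do
  from_path="${LINKS[$to_path]}"
  rm -rf "$to_path"                 # drop whatever the image shipped there
  mkdir -p "$from_path"             # ensure the backing dir exists
  mkdir -p "$(dirname "$to_path")"  # ensure the link's parent exists
  ln -sT "$from_path" "$to_path"    # the "mount" is just a symlink
done
```

Because the targets live on the bind-mounted volume, downloaded weights survive container rebuilds while the application still sees them at its hard-coded paths.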
**`info.py`** (new file, +14 lines, in the AMD service's build context):

```python
import sys
from pathlib import Path

file = Path(sys.argv[1])
file.write_text(
    file.read_text()
    .replace('        return demo', """
        with demo:
            gr.Markdown(
                'Created by [AUTOMATIC1111 / stable-diffusion-webui-docker](https://github.com/AbdBarho/stable-diffusion-webui-docker/)'
            )
        return demo
""", 1)
)
```
**`services/invoke-AMD/Dockerfile`** (new file, +71 lines; path inferred from the compose `build:` entry):

```dockerfile
# syntax=docker/dockerfile:1

FROM alpine:3.17 as xformers
RUN apk add --no-cache aria2
RUN aria2c -x 5 --dir / --out wheel.whl 'https://github.com/AbdBarho/stable-diffusion-webui-docker/releases/download/5.0.0/xformers-0.0.17.dev449-cp310-cp310-manylinux2014_x86_64.whl'


FROM python:3.10-slim
SHELL ["/bin/bash", "-ceuxo", "pipefail"]

ENV DEBIAN_FRONTEND=noninteractive PIP_EXISTS_ACTION=w PIP_PREFER_BINARY=1

RUN --mount=type=cache,target=/root/.cache/pip pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2

RUN apt-get update && apt-get install git -y && apt-get clean

RUN git clone https://github.com/invoke-ai/InvokeAI.git /stable-diffusion

WORKDIR /stable-diffusion

RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git reset --hard f232068ab89bd80e4f5f3133dcdb62ea78f1d0f7
git config --global http.postBuffer 1048576000
egrep -v '^-e .' environments-and-requirements/requirements-lin-cuda.txt > req.txt
pip install -r req.txt
rm req.txt
EOF

# patch match:
# https://github.com/invoke-ai/InvokeAI/blob/main/docs/installation/INSTALL_PATCHMATCH.md
RUN <<EOF
apt-get update
# apt-get install build-essential python3-opencv libopencv-dev -y
apt-get install make g++ libopencv-dev -y
apt-get clean
cd /usr/lib/x86_64-linux-gnu/pkgconfig/
ln -sf opencv4.pc opencv.pc
EOF

ARG BRANCH=main SHA=6e0c6d9cc9f6bdbdefc4b9e94bc1ccde1b04aa42
RUN --mount=type=cache,target=/root/.cache/pip <<EOF
git fetch
git reset --hard
git checkout ${BRANCH}
git reset --hard ${SHA}
pip install .
EOF

RUN --mount=type=cache,target=/root/.cache/pip \
  --mount=type=bind,from=xformers,source=/wheel.whl,target=/xformers-0.0.15-cp310-cp310-linux_x86_64.whl \
  pip install -U opencv-python-headless huggingface_hub triton /xformers-0.0.15-cp310-cp310-linux_x86_64.whl && \
  python3 -c "from patchmatch import patch_match"

RUN touch invokeai.init
COPY . /docker/

ENV PYTHONUNBUFFERED=1 ROOT=/stable-diffusion PYTHONPATH="${PYTHONPATH}:${ROOT}" PRELOAD=false CLI_ARGS="" HF_HOME=/root/.cache/huggingface
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]

# Depending on your actual GPU you may want to comment this out.
# Without this you may get the error "hipErrorNoBinaryForGpu: Unable to find code object for all current devices!"
ENV HSA_OVERRIDE_GFX_VERSION=10.3.0
CMD invokeai --web --host 0.0.0.0 --port 7860 --config /docker/models.yaml --root_dir ${ROOT} --outdir /output/invoke ${CLI_ARGS}
```
Review discussion:

> **Comment:** This does not work on Mac.
>
> **Reply:** @zthxxx Linux only; I don't think this works for Mac or Windows either. If someone actually figures out how to do this AMD setup with Docker on Windows, that would be really, really awesome.
>
> **Reply:** No idea, no Mac or Windows handy to try with either.