docker container support #7946

Closed
wants to merge 75 commits

Conversation


@emsi emsi commented Feb 20, 2023

This PR adds docker container support.

Having a docker container with the UI might be useful, e.g. for deployment testing or for experimenting with untrusted models or scripts. Additional utilities make adding SSL support with automatic certificate creation a breeze too (hints are provided in the README.md file).

Due to technical limitations, only Nvidia acceleration is supported at the moment, via the Nvidia docker runtime.


Additionally, there's a GitHub Action that automatically builds the image and pushes it to Docker Hub (the docker.io registry) under: https://hub.docker.com/r/emsi/stable-diffusion-webui

Keep in mind, though, that the aforementioned image is not meant to simply be docker pulled and docker run :)
You have to clone https://github.com/emsi/stable-diffusion-webui first, enter the stable-diffusion-webui/docker directory, and only then run:

  1. docker compose pull, and then
  2. ./run.sh

This is necessary because the container uses host volumes, binding the path of the stable-diffusion-webui directory into the container. It's meant for interoperability: you can update the image and all your local changes should remain intact, and you can update the local source with git pull without needing to update the container image (unless there are breaking changes).
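
Put together, the setup looks roughly like this (a sketch; the exact behaviour is driven by docker-compose.yml and run.sh in the docker directory):

# Clone the fork that carries the Dockerfile and docker-compose.yml
git clone https://github.com/emsi/stable-diffusion-webui
cd stable-diffusion-webui/docker

# Pull the prebuilt image from Docker Hub instead of building it locally
docker compose pull

# Start the container; run.sh presumably wraps docker compose up with the right options
./run.sh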


martinobettucci commented Feb 20, 2023

Just use our fork and you need no codebase adaptations: https://github.com/P2Enjoy/stable-diffusion-docker @AUTOMATIC1111 @emsi


emsi commented Feb 20, 2023

Just use our fork and you need no codebase adaptations: https://github.com/P2Enjoy/stable-diffusion-docker @AUTOMATIC1111 @emsi

There are no codebase changes. The beauty of it is that it's the official repo without any modifications, just a Dockerfile and a docker-compose.yml for ease of use.


wbudd commented Feb 24, 2023

Thanks for this!

On first glance, I see two typos.
Dockerfile: CMD ["python", "launch.py", "--listen"] should be CMD ["python3", "launch.py", "--listen"]
docker/run.sh: if [ "$(which dokcer-compose)" ]; then should be if [ "$(which docker-compose)" ]; then

@alexw994

Good, but I suggest running the prepare_environment method of launch.py before the CMD line, because it can avoid the need to download requirements the first time the docker image runs.
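
Something along these lines, as a RUN step placed before CMD, could bake the requirements into the image (a rough sketch; I'm assuming launch.py's --exit flag stops it right after prepare_environment(), and --skip-torch-cuda-test is needed because no GPU is visible at build time):

# Rough sketch of the build-time step (would become a RUN instruction before CMD)
# --skip-torch-cuda-test: no GPU is visible while the image is being built
# --exit: assumed to make launch.py stop right after prepare_environment()
python3 launch.py --skip-torch-cuda-test --exit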


emsi commented Feb 24, 2023

Thanks for this!

On first glance, I see two typos. Dockerfile: CMD ["python", "launch.py", "--listen"] should be CMD ["python3", "launch.py", "--listen"] docker/run.sh: if [ "$(which dokcer-compose)" ]; then should be if [ "$(which docker-compose)" ]; then

Thanks, nice catch. Fixed in #4426b, though it's perfectly fine to call python, as the container has only python3 installed as the system-wide python interpreter.


emsi commented Feb 24, 2023

Good, but I suggest running the prepare_environment method of launch.py before the CMD line, because it can avoid the need to download requirements the first time the docker image runs.

I was considering that but there are some issues:

  1. Docker images should remain small and contain no data by design.
  2. Models come from external sources and are not part of stable-diffusion-webui, hence those files are not distributed with stable-diffusion-webui. Putting them inside the container might suggest otherwise.
  3. The user might modify the command, for example to add xformers, and thus the set of downloaded requirements will change.

To alleviate that, the models directory is defined as a VOLUME inside the container and also mounted to the models directory in the source folder, so downloading is necessary only once, just like it is when running without docker.
Additionally, if the models are already downloaded into the appropriate folder, nothing needs to be downloaded again.
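
As an illustration, the mounts boil down to something like the following (a sketch only; the container path and exact flags are illustrative, the real ones live in docker-compose.yml and run.sh):

# Illustrative sketch -- not the literal compose configuration.
# The source tree is bind-mounted from the host, and the models directory is
# mounted as well, so downloads survive image updates and a local git pull
# does not require rebuilding the image.
docker run --rm --runtime=nvidia -p 7860:7860 \
    -v "$PWD/..":/app/stable-diffusion-webui \
    -v "$PWD/../models":/app/stable-diffusion-webui/models \
    emsi/stable-diffusion-webui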

@martinobettucci

@eliassama @emsi
Our build is data agnostic: the image only includes the runtime, and every configuration file, extension, or model ever downloaded is available in the external /data folder.


emsi commented Mar 22, 2023

@eliassama @emsi Our build is data agnostic: the image only includes the runtime, and every configuration file, extension, or model ever downloaded is available in the external /data folder.

That's exactly how it is implemented.

@AUTOMATIC1111 (Owner)

The problem is that I will not make any dockers and I will not be able to maintain this, and even if you do, you'd have to make a PR for me and wait for my approval every time, and I won't be able to review your changes anyway because I don't do docker.

My solution: make it an extension. I realize there can be a problem that someone who wants to make a docker image possibly won't want to run the UI on his local machine to go into the extensions tab and install the extension from there, but a person who wants to make a docker image should have the technical competence to clone the extension from git himself. And the extension can still exist and be added to the index for visibility.


emsi commented Mar 28, 2023

The problem is that I will not make any dockers and I will not be able to maintain this, and even if you do, you'd have to make a PR for me and wait for my approval every time, and I won't be able to review your changes anyway because I don't do docker.

My solution: make it an extension. I realize there can be a problem that someone who wants to make a docker image possibly won't want to run the UI on his local machine to go into the extensions tab and install the extension from there, but a person who wants to make a docker image should have the technical competence to clone the extension from git himself. And the extension can still exist and be added to the index for visibility.

I understand your point, but having docker as a plugin makes no sense. The whole point of the container is to secure against running untrusted code like plugins or models from the internet (which are de facto code). This way I can test some random stuff without fear. Also, docker is very useful for testing, so the container should come before running the UI.

There's another idea that comes to mind: perhaps you could point to my repo and the repo mentioned by @eliassama in the documentation as unofficial docker image sources? I'll set up my repo with the official stable-diffusion-webui as a submodule so it always stays up to date. Other than that there won't be many updates to the docker setup, as it just wraps the official code in a container.

If you're OK with that I'll set up a dedicated repo.

@Brightest08

I used your Dockerfile to build an image, but it failed to run with an error message. What could be the issue?

AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check


emsi commented May 6, 2023

I used your Dockerfile to build an image, but it failed to run with an error message. What could be the issue?

AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

It's a limitation of docker, as it does not honor the runtime argument during build.
The simplest workaround, when building on your local computer, is to use the nvidia runtime as the default runtime. To do so, edit your /etc/docker/daemon.json and add "default-runtime": "nvidia" so it looks something like this:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

If you don't have the Nvidia runtime you should install it first:
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installation-guide

There's no point in using this docker image without a GPU, and the check is there to make sure you are aware of the potential issue.
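
After editing daemon.json, restart the docker daemon and you can verify that the default runtime took effect, e.g.:

# Restart the daemon so the new default runtime takes effect
sudo systemctl restart docker

# Should now report nvidia as the default runtime
docker info | grep -i 'default runtime'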

@undrash undrash left a comment

This is pretty awesome stuff, thanks! Have you tried running it on ECS?

@Brightest08

I used your Dockerfile to build an image, but it failed to run with an error message. What could be the issue?
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

It's a limitation of docker, as it does not honor the runtime argument during build. The simplest workaround, when building on your local computer, is to use the nvidia runtime as the default runtime. To do so, edit your /etc/docker/daemon.json and add "default-runtime": "nvidia" so it looks something like this:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

If you don't have the Nvidia runtime you should install it first: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installation-guide

There's no point in using this docker image without a GPU, and the check is there to make sure you are aware of the potential issue.

I thought that adding the "--skip-torch-cuda-test" parameter would make it work in CPU mode, but I wanted it to use the GPU, so I removed this parameter. I am not very familiar with this matter, so I would like to ask whether adding the "--skip-torch-cuda-test" parameter will still let the container work in GPU mode after the image is built and the container is run. Thank you for your answer, I really appreciate it.


emsi commented May 10, 2023

This is pretty awesome stuff, thanks! Have you tried running it on ECS?

Nope, but it should work as long as the Nvidia runtime and drivers are installed on the host.
I did try it on GCP, though, without issues :)

@shamblessed shamblessed left a comment

Dpck

@catboxanon (Collaborator)

As has been mentioned several times, Docker support will not be added directly to this repo, so I'm going to go ahead and close this PR. However, I've now added links to these Docker projects on the wiki, which are also linked from the wiki homepage.
