Minimal Docker files to run AUTOMATIC1111's stable-diffusion-webui on Google Compute Engine.

I have tested it on:

- Google Compute Engine (GCE) with an Ubuntu 22.04 VM image and an NVIDIA L4 or T4
- macOS 13.2.1 without GPUs

If you use your local machine, you can skip to the Build image section.
- Open https://console.cloud.google.com/compute/instances
- Press `CREATE INSTANCE`
- Name - `instance-1`  # or any name you like
- Region - `us-central1 (Iowa)`
- Zone - `us-central1-a`  # NVIDIA L4 is available in `us-central1-a`
- Machine configuration - `GPUs`
  - GPU type - `NVIDIA L4`  # or T4
  - Machine type - `g2-standard-4 (4 vCPU, 16GB memory)`  # or `n1-standard-4` for T4
- Boot disk - `CHANGE`
  - Operating system - `Ubuntu`
  - Version - `Ubuntu 22.04 LTS`  # description is `x86/64, amd64 jammy image built on ...`
  - Boot disk type - `SSD persistent disk`  # or `Balanced persistent disk`, but SSD is not that expensive
  - Size (GB) - `50`  # `50` is enough for inference (not training)
  - Press `SELECT`
- Identity and API access  # if you use GCS buckets to save outputs
  - Access scopes - `Set access for each API`
    - Storage - `Full`
- Advanced options
  - Management
    - Availability policies
      - VM Provisioning model - `Spot`  # if you prefer low cost over stability
- Press `CREATE`
The monthly estimate should look like this:

    Monthly estimate
    $163.30
    That's about $0.22 hourly

After you press `CREATE`, the instances table will show up.
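The hourly figure follows from GCP's 730-hour billing month, which you can check with a one-liner:

```shell
# GCP's monthly estimates assume a 730-hour month (365 * 24 / 12)
awk 'BEGIN { printf "$%.2f\n", 163.30 / 730 }'   # prints $0.22
```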
- Commands to create an instance with the gcloud CLI

If you prefer the gcloud CLI, you can use the following commands.

```sh
export PROJECT_ID="stable-diffusion-367007"  # change to your project ID
export ZONE="us-central1-a"
export INSTANCE_NAME="instance-1"
export MACHINE_TYPE="g2-standard-4"    # for NVIDIA L4
# export MACHINE_TYPE="n1-standard-4"  # for NVIDIA T4
export SCOPES="default,storage-full"
export IMAGE_PROJECT="ubuntu-os-cloud"
export IMAGE_FAMILY="ubuntu-2204-lts"
export DISK_NAME="disk-1"
export DISK_SIZE="50GB"
export DISK_TYPE="pd-ssd"
export ACCELERATOR="nvidia-l4"           # for NVIDIA L4
# export ACCELERATOR="nvidia-tesla-t4"   # for NVIDIA T4
export PROVISIONING_MODEL="SPOT"         # or "STANDARD"

gcloud compute instances create $INSTANCE_NAME \
  --project=$PROJECT_ID \
  --zone=$ZONE \
  --machine-type=$MACHINE_TYPE \
  --scopes=$SCOPES \
  --create-disk=boot=yes,image-project=${IMAGE_PROJECT},image-family=${IMAGE_FAMILY},name=${DISK_NAME},size=${DISK_SIZE},type=${DISK_TYPE} \
  --accelerator=count=1,type=${ACCELERATOR} \
  --provisioning-model=$PROVISIONING_MODEL

gcloud compute instances describe $INSTANCE_NAME --project=$PROJECT_ID --zone=$ZONE
gcloud compute instances list --project=$PROJECT_ID
```
- Press the triangle on the right of the `SSH` button, then `View gcloud command`, then `COPY TO CLIPBOARD`.
- Open a terminal on the local machine, paste the clipboard, and add `-- -L 7860:localhost:7860` for port forwarding.

```sh
gcloud compute ssh --zone=$ZONE $INSTANCE_NAME --project=$PROJECT_ID -- -L 7860:localhost:7860
```
```sh
git clone https://github.com/susumuota/stable-diffusion-minimal-docker.git
bash ./stable-diffusion-minimal-docker/gce/create_dotfiles.sh
bash ./stable-diffusion-minimal-docker/gce/install_cuda_drivers.sh
sudo reboot  # and ssh again
```
Start `screen`. Sometimes the ssh connection gets lost; you can recover the session with `screen -r`.

```sh
screen
bash ./stable-diffusion-minimal-docker/gce/install_webui.sh && cd stable-diffusion-webui
sudo apt-get install -y aria2
aria2c -x 5 "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" -d "models/Stable-diffusion" -o "v1-5-pruned-emaonly.safetensors"
aria2c -x 5 "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -d "models/VAE" -o "vae-ft-mse-840000-ema-pruned.safetensors"
./webui.sh --xformers
```
```sh
bash ./stable-diffusion-minimal-docker/gce/install_docker.sh
sudo reboot  # and ssh again
bash ./stable-diffusion-minimal-docker/gce/install_nvidia_container_toolkit.sh  # you don't need to reboot here
```
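To confirm that containers can actually see the GPU after installing the toolkit, a common smoke test is to run `nvidia-smi` inside a throwaway container (a sketch; the call is left commented so you can run it deliberately on the instance):

```shell
# Sketch: if the NVIDIA Container Toolkit is set up correctly, this
# prints the same GPU table as running nvidia-smi on the host.
check_gpu() {
  docker run --rm --gpus all ubuntu:22.04 nvidia-smi
}
# check_gpu
```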
Make sure you have cloned this repository.

```sh
git clone https://github.com/susumuota/stable-diffusion-minimal-docker.git
cd stable-diffusion-minimal-docker
```

Build the Docker image, download the model files, and start the webui.

```sh
cd webui  # or webui-cpu
docker compose build
sudo apt-get install -y aria2
aria2c -x 5 "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" -d "models/Stable-diffusion" -o "v1-5-pruned-emaonly.safetensors"
aria2c -x 5 "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors" -d "models/VAE" -o "vae-ft-mse-840000-ema-pruned.safetensors"
docker compose up -d                    # start webui in background
docker compose logs --no-log-prefix -f  # show logs
```

Access http://localhost:7860/

To stop the webui:

```sh
docker compose down
```
Create a bucket on GCS, e.g. `gs://sd-outputs-1`. The location (`us-central1`) must be the same as the instance's.

```sh
gsutil mb -l us-central1 gs://sd-outputs-1
```

On the GCE instance:

```sh
cd ~/stable-diffusion-webui
bash ~/stable-diffusion-minimal-docker/gce/rsync_remote.sh
```

On the local machine:

```sh
bash ~/stable-diffusion-minimal-docker/gce/rsync_local.sh
```

Then, check the `outputs` directory on the local machine periodically.
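The sync scripts are not reproduced here, but the local side presumably amounts to a periodic `gsutil rsync` pull. A hypothetical sketch (the bucket name, directories, and interval are assumptions; check the actual scripts in `gce/`):

```shell
# Hypothetical sketch of what gce/rsync_local.sh might do; the real
# script may differ. BUCKET and DEST are assumptions.
BUCKET="gs://sd-outputs-1"
DEST="$HOME/outputs"

sync_outputs() {
  # -m: parallel transfers, -r: recurse; unchanged files are skipped
  gsutil -m rsync -r "$BUCKET/outputs" "$DEST"
}

# Pull new images every 60 seconds:
# while true; do sync_outputs; sleep 60; done
```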
- https://console.cloud.google.com/compute/instances
- https://cloud.google.com/compute/docs/instances/stop-start-instance#billing

DON'T FORGET TO DELETE INSTANCES

- Select `instance-1` in the VM list.
- Press the `DELETE` button.

Or with the gcloud CLI:

```sh
gcloud compute instances delete $INSTANCE_NAME --project=$PROJECT_ID --zone=$ZONE
gcloud compute instances list --project=$PROJECT_ID
```
If you DELETE the VM instance, you will not be charged (as far as I know). However, if you STOP the VM instance, you will still be charged for attached resources (e.g. the persistent disk) until you DELETE it. You should DELETE the instance if you are not going to use it for a long time (although you will have to set up the environment again).
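For a rough sense of what a stopped instance still costs: assuming SSD persistent disk at roughly $0.17 per GB-month in us-central1 (an assumption; check current GCP pricing), the 50 GB boot disk alone keeps accruing:

```shell
# Assumed rate: ~$0.17 per GB-month for pd-ssd in us-central1
awk 'BEGIN { printf "$%.2f/month\n", 50 * 0.17 }'   # prints $8.50/month
```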
If you want to confirm that you will no longer be charged, delete the project.
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
- https://github.com/AbdBarho/stable-diffusion-webui-docker
MIT License. See the LICENSE file.
Susumu OTA