- GPU sharing cloud service Vast.ai : https://cloud.vast.ai/
- Setup memo : 2023/09/01 ver, for AUTOMATIC1111/stable-diffusion-webui 1.6.0 (for SDXL 1.0)
- Video explaining in Japanese (YouTube) : https://www.youtube.com/watch?v=U4HrpzkinP4
- Set up the SSH client on your local PC (Windows, macOS, Linux, ...) and generate an SSH keypair (private key & public key) for remote access (an example command is shown after this list).
- Create a new vast.ai account (Client account type).
- Register your SSH public key in the vast.ai Account menu.
- Add 10 USD of credit with your credit card on the Billing menu. I do NOT recommend the Auto charge setting.
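- ex) with the OpenSSH client (bundled with Windows 10/11, macOS, and Linux) you can generate the keypair like this; the file name vastai_key is only an example:
ssh-keygen -t ed25519 -f ~/.ssh/vastai_key -C "vast.ai"
# register the public key printed by this command in the vast.ai Account menu
cat ~/.ssh/vastai_key.pub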
- Base docker image : nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
- Launch Type : ssh
- On-start script : not set
- Disk Space To Allocate : over ~60GB (recommended)
- Launch mode : "Run interactive shell server, SSH. This will allow you to connect and run commands using an SSH client."
  - Checked : "Use direct SSH connection - faster than proxy, but limited to machines with open ports. Proxy ssh still available as backup."
- GPU Type : Interruptible and On-Demand are both OK
- #GPUs : 1X (even if an instance has multiple GPUs, the current version of Stable Diffusion WebUI uses only one GPU.)
- Start an instance based on the above Instance configuration.
- Wait until the instance loads the docker image and starts up (5 minutes at the fastest, 20 minutes at the longest).
- Push the >_ CONNECT button and copy the Direct ssh connect command.
  - ex) ssh -p XXXXX root@AAA.BBB.CCC.DDD -L 8080:localhost:8080
- Paste the command into your terminal/command prompt and add -L 7860:localhost:7860 for browser access (SSH local port forwarding).
  - ex) ssh -p XXXXX root@AAA.BBB.CCC.DDD -L 7860:localhost:7860
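- ex) if you reconnect often, the same forwarding can be kept in your local ~/.ssh/config; the Host alias vastai and the key file name are only examples, and XXXXX / AAA.BBB.CCC.DDD are the placeholders from above:
Host vastai
    HostName AAA.BBB.CCC.DDD
    Port XXXXX
    User root
    IdentityFile ~/.ssh/vastai_key
    LocalForward 7860 localhost:7860
- afterwards you can connect with just: ssh vastai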
- Connect to the instance via SSH with the ssh command from 4. above.
- Install AUTOMATIC1111 / stable-diffusion-webui (v1.6.0)
6.1 step1 (as ROOT)
apt-get install vim unzip libgl1-mesa-dev libcairo2-dev wget git -y
apt-get install python3 python3-venv python3-dev build-essential -y
adduser user1 --disabled-password --gecos ""
su user1
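- (optional) before step 6.2 you can confirm that the GPU is visible inside the container; nvidia-smi is normally provided to GPU containers by the host driver:
nvidia-smi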
6.2 step2 (as user1 = not root user)
cd ~
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
# Install and launch AUTOMATIC1111 WebUI (with xformers)
./webui.sh --xformers
7. Access Stable Diffusion from your local PC's web browser
- http://localhost:7860 (via the SSH local port forward -L 7860:localhost:7860)
- terminate with Ctrl + C
- restart with ./webui.sh --xformers
- To restart the WebUI after reconnecting via SSH (you log in as root, so switch to user1 first):
su user1
cd ~/stable-diffusion-webui/
./webui.sh --xformers
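- (optional) if you want the WebUI to keep running after the SSH session is closed, one way is to launch it with nohup as user1 and follow the log:
nohup ./webui.sh --xformers > ~/webui.log 2>&1 &
tail -f ~/webui.log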
- Download Stable Diffusion checkpoints into models/Stable-diffusion/ (execute as user1)
wget -P /home/user1/stable-diffusion-webui/models/Stable-diffusion/ http://example.com/HogeHogeModel.safetensors
- SDXL 1.0 (base + refiner)
wget -P /home/user1/stable-diffusion-webui/models/Stable-diffusion/ https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget -P /home/user1/stable-diffusion-webui/models/Stable-diffusion/ https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
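- the SDXL base and refiner checkpoints are several GB each; you can confirm the downloads completed by listing the directory:
ls -lh /home/user1/stable-diffusion-webui/models/Stable-diffusion/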
- Download VAE files into models/VAE/ (execute as user1)
wget -P /home/user1/stable-diffusion-webui/models/VAE/ http://example.com/vae-ft-mse-840000-ema-pruned.safetensors
- SDXL 1.0 VAE
wget -P /home/user1/stable-diffusion-webui/models/VAE/ https://huggingface.co/stabilityai/sdxl-vae/resolve/main/sdxl_vae.safetensors
- Download LoRA files into models/Lora/ (execute as user1)
wget -P /home/user1/stable-diffusion-webui/models/Lora/ http://example.com/HogeHogeLora.safetensors
- Download embeddings (e.g. EasyNegative) into embeddings/ (execute as user1)
wget -P /home/user1/stable-diffusion-webui/embeddings/ http://example.com/EasyNegative.safetensors
- Copy the generated images to your local PC (execute from your local PC's terminal)
scp -r -P XXXXX root@AAA.BBB.CCC.DDD:/home/user1/stable-diffusion-webui/outputs ./outputs/
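- (optional) as an alternative to scp, if rsync is installed on both your local PC and the instance, repeated runs transfer only the new images (same placeholders as above):
rsync -avz -e "ssh -p XXXXX" root@AAA.BBB.CCC.DDD:/home/user1/stable-diffusion-webui/outputs/ ./outputs/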
- (Optional) Install Apache on the instance to browse the outputs directory over HTTP.
- As the root user:
apt install apache2 -y
a2enmod userdir
/etc/init.d/apache2 restart
su user1
- As user1:
cd
mkdir ~/public_html
chmod 711 $HOME
chmod 755 ~/public_html
ln -s ~/stable-diffusion-webui/outputs/ ~/public_html/outputs
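- (optional) you can check the userdir setup from inside the instance before reconnecting; wget was installed in step 6.1, and an HTTP 200 response means Apache is serving the directory:
wget -S --spider http://localhost/~user1/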
- Reconnect with an additional port forwarding setting for the web server
ssh -p XXXXX root@AAA.BBB.CCC.DDD -L 8080:localhost:80 -L 7860:localhost:7860
- Access from your local PC with a web browser (via SSH port forwarding)
- http://localhost:8080/~user1/
- [1] Start an instance that has 2 GPUs on vast.ai
- [2] Connect to the instance with SSH port forwarding for both WebUI ports
ssh -p XXXXX root@AAA.BBB.CCC.DDD -L 7860:localhost:7860 -L 7861:localhost:7861
- [3] Start the AUTOMATIC1111 WebUI once (1st time setup) and then stop it.
- [4] Clone the config file and launch script
cd ~/stable-diffusion-webui/
cp ui-config.json ui-config2.json
cp webui.sh webui2.sh
- [5] Edit ui-config2.json
- [6] Start the two WebUIs in the background with &
cd ~/stable-diffusion-webui/
./webui.sh --device-id 0 --listen --port 7860 --xformers --ui-settings-file ui-config.json &
./webui2.sh --device-id 1 --listen --port 7861 --xformers --ui-settings-file ui-config2.json &
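- (optional) before opening the browsers you can confirm that both background instances are up; wget was installed in step 6.1, and each WebUI takes a while to finish starting:
# jobs must be run in the same shell where the two & commands were launched
jobs
wget -q -O /dev/null http://localhost:7860 && echo "GPU0 UI is up"
wget -q -O /dev/null http://localhost:7861 && echo "GPU1 UI is up"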
- [7] Access the two WebUIs from two browser tabs (http://localhost:7860 and http://localhost:7861)
- [8] Change the save directory name so the two UIs write to separate folders
- Settings -> Saving to a directory -> change Directory name pattern from [date] to 0_[date] for GPU0 / 1_[date] for GPU1
- Settings ->