Here are the commands you need to run inside the containers:
sdxl-finetune
bind-mount /config.toml, /input and /output
config.toml should contain
# for sdxl fine tuning
[general]
enable_bucket = true # Whether to use Aspect Ratio Bucketing
[[datasets]]
resolution = 1024 # Training resolution
batch_size = 4 # Batch size
[[datasets.subsets]]
image_dir = '/input' # Specify the folder containing the training images
caption_extension = '.txt' # Caption file extension for the training images
num_repeats = 10 # Number of repetitions for training images
The input should be a folder of images with their captions in text files, e.g. foo.jpg should have a corresponding foo.txt containing its caption
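That layout can be sanity-checked with a short shell loop before launching a run (an illustrative sketch; adjust the image extension to match your dataset):

```shell
# Report any image in /input that lacks a matching caption file.
for img in /input/*.jpg; do
  cap="${img%.jpg}.txt"
  [ -f "$cap" ] || echo "missing caption for $img"
done
```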
Based on https://github.com/kohya-ss/sd-scripts
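Once the module image exists, a local run with the three bind mounts described above would look roughly like this (the image name `sdxl-finetune` is a placeholder, not a published tag):

```shell
# Sketch of a local run: config, dataset, and output are all bind-mounted.
docker run --rm --gpus all \
  -v "$PWD/config.toml:/config.toml" \
  -v "$PWD/input:/input" \
  -v "$PWD/output:/output" \
  sdxl-finetune
```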
sdxl-inference
Given an input LoRA in the bind-mounted /input directory, inference is then just:
accelerate launch --num_cpu_threads_per_process 1 sdxl_minimal_inference.py \
--ckpt_path=sdxl/sd_xl_base_1.0.safetensors \
--lora_weights=/input/lora.safetensors \
--prompt="cj hole for sale sign in front of a posh house with a tesla in winter with snow" \
--output_dir=/output
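As with fine-tuning, a local invocation of the inference image would look roughly like this (again, the image name `sdxl-inference` is a placeholder):

```shell
# /input holds lora.safetensors; generated images land in /output.
docker run --rm --gpus all \
  -v "$PWD/input:/input" \
  -v "$PWD/output:/output" \
  sdxl-inference
```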
We want four new Lilypad modules (Dockerfiles and Docker images):
They should copy the formula of https://github.com/bacalhau-project/lilypad-module-lora-training and https://github.com/bacalhau-project/lilypad-module-lora-inference, which do the same thing for Stable Diffusion 1.5.
sdxl-finetune
sdxl-inference
mistral-finetune
mistral-inference