Troubleshooting
To get the most out of training, a card with at least 12GB of VRAM is recommended. Currently, only GPUs with 10GB or more of VRAM are supported.
Settings that use more VRAM:
- High Batch Size
- Set Gradients to None When Zeroing
- Use EMA
- Full Precision
- Default Memory Attention
- Cache Latents
- Text Encoder

Settings that use less VRAM:
- Low Batch Size
- Gradient Checkpointing
- fp16/bf16 precision
- xformers/flash_attention
- Step Ratio of Text Encoder Training set to 0 (no text encoder)
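Several of the low-VRAM options above correspond to standard PyTorch mechanisms. This is a minimal sketch (not the extension's actual code) showing reduced-precision autocast and zeroing gradients with `set_to_none=True`; the model and batch are placeholders, and it falls back to CPU when no GPU is present:

```python
import torch

# Placeholder model/optimizer standing in for the real UNet + its optimizer.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

use_cuda = torch.cuda.is_available()
# GradScaler guards fp16 training against underflow; it is a no-op on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

batch = torch.randn(2, 8)  # small batch: lower batch size -> less VRAM

# fp16/bf16 precision: run the forward pass in reduced precision.
with torch.autocast(
    device_type="cuda" if use_cuda else "cpu",
    dtype=torch.float16 if use_cuda else torch.bfloat16,
):
    loss = model(batch).pow(2).mean()

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# "Set Gradients to None When Zeroing": frees the gradient tensors
# entirely instead of filling them with zeros, saving memory.
optimizer.zero_grad(set_to_none=True)
```

Gradient checkpointing and xformers attention save memory the same way in spirit (recompute or restructure instead of store), but are enabled through the model rather than the training loop.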
WIP
Here's a bunch of random stuff I added that seemed useful, but didn't seem to fit anywhere else.
Preview Prompts - Returns a JSON string of the prompts that will be used for training. The output isn't pretty, but it lets you confirm that things are set up correctly.
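If you want a readable view of that output, the raw string can be pretty-printed with the standard library. The sample string below is a hypothetical illustration of the kind of JSON the button returns, not its exact schema:

```python
import json

# Hypothetical example of a Preview Prompts result (actual fields may differ).
raw = '[{"prompt": "photo of sks dog", "seed": -1}, {"prompt": "photo of sks dog, close up", "seed": -1}]'

prompts = json.loads(raw)
print(json.dumps(prompts, indent=2))  # one field per line, easy to scan
```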
Generate Sample Image
- Generate a sample using the specified seed and prompt below.
Sample Prompt
- What the sample should be.
Sample Seed
- The seed to use for your sample. Leave at -1 to use a random seed.
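The `-1` convention above is a common pattern for "pick a seed for me". A minimal sketch of how such a field is typically resolved (the function name is illustrative, not the extension's API):

```python
import random

def resolve_seed(seed: int) -> int:
    """Return the seed to use; -1 means draw a random 32-bit seed."""
    if seed == -1:
        return random.randint(0, 2**32 - 1)
    return seed
```

Recording the resolved value rather than `-1` lets you reproduce a sample you liked later.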
Train Imagic Only
- Imagic is essentially Dreambooth, but it uses only one image and is significantly faster.
If using Imagic, the first image in the first concept's Instance Data Dir will be used for training.
See https://github.com/ShivamShrirao/diffusers/tree/main/examples/imagic for more details.
Wiki
Getting Started
Advanced Stuff
- All settings explained
- API
- Batch Size
- Gradient Accumulation
- Learning Rate Scheduler
- Warmup
- Bucketing
- Captioning
Troubleshooting
- Out of memory
- Overtrain
- Debugging
- Optimal Batch Size
- Commits Hall of Fame
- Extremely Experimental Libs
Changelog of main revisions
- [Changelog]