LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it quicker to train a model to learn a new concept. LoRA weights are typically loaded into the UNet, the text encoder, or both. There are two classes for loading LoRA weights:
- [`LoraLoaderMixin`] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and more functions for managing LoRA weights. This class can be used with any model.
- [`StableDiffusionXLLoraLoaderMixin`] is a Stable Diffusion XL (SDXL) version of the [`LoraLoaderMixin`] class for loading and saving LoRA weights. It can only be used with the SDXL model.
To learn more about how to load LoRA weights, see the LoRA loading guide.
[[autodoc]] loaders.lora.LoraLoaderMixin
[[autodoc]] loaders.lora.StableDiffusionXLLoraLoaderMixin