
# LoRA

LoRA is a fast and lightweight training method that inserts and trains a significantly smaller number of parameters instead of all the model parameters. This produces a smaller file (~100 MB) and makes it easier to quickly train a model to learn a new concept. LoRA weights are typically loaded into the UNet, the text encoder, or both. There are two classes for loading LoRA weights:

  • [LoraLoaderMixin] provides functions for loading and unloading, fusing and unfusing, enabling and disabling, and other operations for managing LoRA weights. This class can be used with any model.
  • [StableDiffusionXLLoraLoaderMixin] is a Stable Diffusion XL (SDXL) version of the [LoraLoaderMixin] class for loading and saving LoRA weights. It can only be used with the SDXL model.

To learn more about how to load LoRA weights, see the LoRA loading guide.

## LoraLoaderMixin

[[autodoc]] loaders.lora.LoraLoaderMixin

## StableDiffusionXLLoraLoaderMixin

[[autodoc]] loaders.lora.StableDiffusionXLLoraLoaderMixin