diff --git a/examples/dreambooth/README_sana.md b/examples/dreambooth/README_sana.md
index fe861d62472b..d82529c64de8 100644
--- a/examples/dreambooth/README_sana.md
+++ b/examples/dreambooth/README_sana.md
@@ -73,7 +73,7 @@ This will also allow us to push the trained LoRA parameters to the Hugging Face
 Now, we can launch training using:
 
 ```bash
-export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_diffusers"
+export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers"
 export INSTANCE_DIR="dog"
 export OUTPUT_DIR="trained-sana-lora"
 
@@ -124,4 +124,4 @@ We provide several options for optimizing memory optimization:
 * `cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
 * `--use_8bit_adam`: When enabled, we will use the 8bit version of AdamW provided by the `bitsandbytes` library.
 
-Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.
\ No newline at end of file
+Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.
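
As a quick illustration of what the updated `MODEL_NAME` points to, the sketch below (not part of the diff itself) loads the BF16 Sana checkpoint with `SanaPipeline` and attaches the LoRA weights written to `OUTPUT_DIR` ("trained-sana-lora") by the training command in the README. The prompt and output filename are illustrative assumptions, and exact dtype/`variant` handling should follow the SanaPipeline documentation linked at the end of the file.

```python
# Minimal sketch, assuming the training run above completed and saved its LoRA
# weights to "trained-sana-lora" (the OUTPUT_DIR from the README). The prompt
# and filename are placeholders, not values taken from this diff.
import torch
from diffusers import SanaPipeline

# Load the BF16 checkpoint that the new MODEL_NAME points to.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Attach the DreamBooth LoRA produced by train_dreambooth_lora_sana.py.
pipe.load_lora_weights("trained-sana-lora")

image = pipe(prompt="a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```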