From 41d8e074ee52988d3a987d32dee1dd867262e22e Mon Sep 17 00:00:00 2001
From: Dhruv Nair
Date: Mon, 19 Feb 2024 08:40:48 +0000
Subject: [PATCH 1/3] update

---
 docs/source/en/api/pipelines/animatediff.md | 85 +++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/docs/source/en/api/pipelines/animatediff.md b/docs/source/en/api/pipelines/animatediff.md
index 817bc1b19eb5..c4002f1727d1 100644
--- a/docs/source/en/api/pipelines/animatediff.md
+++ b/docs/source/en/api/pipelines/animatediff.md
@@ -408,6 +408,91 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
 
+## Using AnimateLCM
+
+[AnimateLCM](https://animatelcm.github.io/) is a Motion Module checkpoint and an LCM LoRA created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
+
+```python
+import torch
+from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
+from diffusers.utils import export_to_gif
+
+adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")
+pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)
+pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
+
+pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora")
+
+pipe.enable_vae_slicing()
+pipe.enable_model_cpu_offload()
+
+output = pipe(
+    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
+    negative_prompt="bad quality, worse quality, low resolution",
+    num_frames=16,
+    guidance_scale=1.5,
+    num_inference_steps=6,
+    generator=torch.Generator("cpu").manual_seed(0),
+)
+frames = output.frames[0]
+export_to_gif(frames, "animatelcm.gif")
+```
+
+<table>
+    <tr>
+        <td><center>
+        A space rocket, 4K.
+        </center></td>
+    </tr>
+</table>
+
+AnimateLCM is also compatible with existing [Motion LoRAs](https://huggingface.co/collections/dn6/animatediff-motion-loras-654cb8ad732b9e3cf4d3c17e).
+
+```python
+import torch
+from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
+from diffusers.utils import export_to_gif
+
+adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")
+pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=adapter)
+pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")
+
+pipe.load_lora_weights("wangfuyun/AnimateLCM", weight_name="sd15_lora_beta.safetensors", adapter_name="lcm-lora")
+pipe.load_lora_weights("guoyww/animatediff-motion-lora-tilt-up", adapter_name="tilt-up")
+
+pipe.set_adapters(["lcm-lora", "tilt-up"], [1.0, 0.8])
+pipe.enable_vae_slicing()
+pipe.enable_model_cpu_offload()
+
+output = pipe(
+    prompt="A space rocket with trails of smoke behind it launching into space from the desert, 4k, high resolution",
+    negative_prompt="bad quality, worse quality, low resolution",
+    num_frames=16,
+    guidance_scale=1.5,
+    num_inference_steps=6,
+    generator=torch.Generator("cpu").manual_seed(0),
+)
+frames = output.frames[0]
+export_to_gif(frames, "animatelcm-motion-lora.gif")
+```
+
+<table>
+    <tr>
+        <td><center>
+        A space rocket, 4K.
+        </center></td>
+    </tr>
+</table>
+
 ## AnimateDiffPipeline
 
 [[autodoc]] AnimateDiffPipeline

From b544b408a646dc0281f11f147eb3a191eacec598 Mon Sep 17 00:00:00 2001
From: Dhruv Nair
Date: Mon, 19 Feb 2024 15:13:54 +0000
Subject: [PATCH 2/3] update

---
 docs/source/en/api/pipelines/animatediff.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/api/pipelines/animatediff.md b/docs/source/en/api/pipelines/animatediff.md
index c4002f1727d1..99e4032d991f 100644
--- a/docs/source/en/api/pipelines/animatediff.md
+++ b/docs/source/en/api/pipelines/animatediff.md
@@ -410,7 +410,7 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
 
 ## Using AnimateLCM
 
-[AnimateLCM](https://animatelcm.github.io/) is a Motion Module checkpoint and an LCM LoRA created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
+[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora)created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
 ```python
 import torch

From a17d8757cab5936732b95ee107a3520303dac830 Mon Sep 17 00:00:00 2001
From: Dhruv Nair
Date: Mon, 19 Feb 2024 16:13:45 +0000
Subject: [PATCH 3/3] update

---
 docs/source/en/api/pipelines/animatediff.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/api/pipelines/animatediff.md b/docs/source/en/api/pipelines/animatediff.md
index 99e4032d991f..62a132c77ae1 100644
--- a/docs/source/en/api/pipelines/animatediff.md
+++ b/docs/source/en/api/pipelines/animatediff.md
@@ -410,7 +410,7 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
 
 ## Using AnimateLCM
 
-[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora)created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
+[AnimateLCM](https://animatelcm.github.io/) is a motion module checkpoint and an [LCM LoRA](https://huggingface.co/docs/diffusers/using-diffusers/inference_with_lcm_lora) that have been created using a consistency learning strategy that decouples the distillation of the image generation priors and the motion generation priors.
 
 ```python
 import torch