Commit b6e0b01

Lazy Import for Diffusers (huggingface#4829)
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion ddim ddpm
* lazy import deepfloyd
* lazy import kandinksy
* lazy imports
* lazy import semantic diffusion
* lazy imports
* lazy import stable diffusion
* move sd output to its own module
* clean up
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffsuion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* lazy import
* clean up
* clean up
* fixes for tests
* fixes for tests
* clean up
* remove import of torch_utils from utils module
* clean up
* clean up
* fix mistake import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* fix check dummies
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
1 parent 8873524 commit b6e0b01
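
For context, a minimal sketch of the lazy-import pattern the commit message describes: the top-level `__init__.py` keeps an `_import_structure` map and defers the real submodule imports until a name is first accessed. The sketch below uses PEP 562's module `__getattr__` as a stand-in for the `_LazyModule` helper the commit adds to Diffusers; the submodule names and contents are illustrative, not the exact structure of this diff.

```python
# Minimal sketch of a lazy package __init__.py, assuming a simplified
# _import_structure. Diffusers wraps the same idea in a _LazyModule class
# and substitutes dummy objects when an optional backend is missing.
import importlib

_import_structure = {
    "schedulers": ["DDIMScheduler", "DDPMScheduler"],
    "pipelines": ["DiffusionPipeline"],
}

# Reverse map: public name -> submodule that defines it.
_name_to_module = {
    name: module for module, names in _import_structure.items() for name in names
}


def __getattr__(name):
    # Import the defining submodule only on first attribute access.
    if name in _name_to_module:
        module = importlib.import_module(f".{_name_to_module[name]}", __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

With this pattern, `import diffusers` stays cheap; heavy submodules (and their torch/transformers imports) load only when a class such as `DDIMScheduler` is first touched.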

290 files changed, +2885 additions, -1182 deletions


docs/source/en/api/utilities.md

Lines changed: 6 additions & 10 deletions
@@ -2,30 +2,26 @@
 
 Utility and helper functions for working with 🤗 Diffusers.
 
-## randn_tensor
-
-[[autodoc]] diffusers.utils.randn_tensor
-
 ## numpy_to_pil
 
-[[autodoc]] utils.pil_utils.numpy_to_pil
+[[autodoc]] utils.numpy_to_pil
 
 ## pt_to_pil
 
-[[autodoc]] utils.pil_utils.pt_to_pil
+[[autodoc]] utils.pt_to_pil
 
 ## load_image
 
-[[autodoc]] utils.testing_utils.load_image
+[[autodoc]] utils.load_image
 
 ## export_to_gif
 
-[[autodoc]] utils.testing_utils.export_to_gif
+[[autodoc]] utils.export_to_gif
 
 ## export_to_video
 
-[[autodoc]] utils.testing_utils.export_to_video
+[[autodoc]] utils.export_to_video
 
 ## make_image_grid
 
-[[autodoc]] utils.pil_utils.make_image_grid
+[[autodoc]] utils.pil_utils.make_image_grid
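
The doc change above re-points the `[[autodoc]]` targets from the private `pil_utils`/`testing_utils` modules to the public `diffusers.utils` namespace and drops `randn_tensor` from the page. A hedged sketch of the import paths implied by those targets (the URL is a placeholder, not from the commit):

```python
# Where the documented helpers resolve after this commit, as implied by the
# [[autodoc]] targets above; exact re-exports may differ.
from diffusers.utils import export_to_gif, export_to_video, load_image, numpy_to_pil, pt_to_pil
from diffusers.utils.torch_utils import randn_tensor  # no longer documented under diffusers.utils

image = load_image("https://example.com/cat.png")  # placeholder URL for illustration
```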

docs/source/en/using-diffusers/reproducibility.md

Lines changed: 11 additions & 11 deletions
@@ -28,7 +28,7 @@ This is why it's important to understand how to control sources of randomness in
 
 ## Control randomness
 
-During inference, pipelines rely heavily on random sampling operations which include creating the 
+During inference, pipelines rely heavily on random sampling operations which include creating the
 Gaussian noise tensors to denoise and adding noise to the scheduling step.
 
 Take a look at the tensor values in the [`DDIMPipeline`] after two inference steps:
@@ -47,7 +47,7 @@ image = ddim(num_inference_steps=2, output_type="np").images
 print(np.abs(image).sum())
 ```
 
-Running the code above prints one value, but if you run it again you get a different value. What is going on here? 
+Running the code above prints one value, but if you run it again you get a different value. What is going on here?
 
 Every time the pipeline is run, [`torch.randn`](https://pytorch.org/docs/stable/generated/torch.randn.html) uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.
 
@@ -81,16 +81,16 @@ If you run this code example on your specific hardware and PyTorch version, you
 
 <Tip>
 
-💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of 
-just integer values representing the seed, but this is the recommended design when dealing with 
-probabilistic models in PyTorch as `Generator`'s are *random states* that can be 
+💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of
+just integer values representing the seed, but this is the recommended design when dealing with
+probabilistic models in PyTorch as `Generator`'s are *random states* that can be
 passed to multiple pipelines in a sequence.
 
 </Tip>
 
 ### GPU
 
-Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU: 
+Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU:
 
 ```python
 import torch
@@ -113,7 +113,7 @@ print(np.abs(image).sum())
 
 The result is not the same even though you're using an identical seed because the GPU uses a different random number generator than the CPU.
 
-To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
+To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.torch_utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
 
 You'll see the results are much closer now!
 
@@ -139,14 +139,14 @@ print(np.abs(image).sum())
 <Tip>
 
 💡 If reproducibility is important, we recommend always passing a CPU generator.
-The performance loss is often neglectable, and you'll generate much more similar 
+The performance loss is often neglectable, and you'll generate much more similar
 values than if the pipeline had been run on a GPU.
 
 </Tip>
 
-Finally, for more complex pipelines such as [`UnCLIPPipeline`], these are often extremely 
-susceptible to precision error propagation. Don't expect similar results across 
-different GPU hardware or PyTorch versions. In this case, you'll need to run 
+Finally, for more complex pipelines such as [`UnCLIPPipeline`], these are often extremely
+susceptible to precision error propagation. Don't expect similar results across
+different GPU hardware or PyTorch versions. In this case, you'll need to run
 exactly the same hardware and PyTorch version for full reproducibility.
 
 ## Deterministic algorithms
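
To illustrate the pattern this page recommends after the path change, here is a hedged sketch combining `randn_tensor` at its new `diffusers.utils.torch_utils` location with a CPU `Generator` passed to a GPU pipeline. The checkpoint id, tensor shape, and availability of a CUDA device are assumptions for the example, not taken from this diff.

```python
# A sketch of CPU-seeded noise with the post-commit import path, assuming a
# CUDA device and the "google/ddpm-cifar10-32" checkpoint for illustration.
import torch
from diffusers import DDIMPipeline
from diffusers.utils.torch_utils import randn_tensor

generator = torch.manual_seed(0)  # a CPU random state

# randn_tensor samples on the CPU generator, then places the tensor on `device`.
noise = randn_tensor((1, 3, 32, 32), generator=generator, device=torch.device("cuda"))

# The same CPU Generator can be passed straight to a pipeline running on GPU.
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32").to("cuda")
image = ddim(num_inference_steps=2, output_type="np", generator=generator).images
```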

examples/community/clip_guided_images_mixing_stable_diffusion.py

Lines changed: 2 additions & 4 deletions
@@ -19,10 +19,8 @@
     UNet2DConditionModel,
 )
 from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import (
-    PIL_INTERPOLATION,
-    randn_tensor,
-)
+from diffusers.utils import PIL_INTERPOLATION
+from diffusers.utils.torch_utils import randn_tensor
 
 
 def preprocess(image, w, h):

examples/community/clip_guided_stable_diffusion_img2img.py

Lines changed: 2 additions & 5 deletions
@@ -19,11 +19,8 @@
     UNet2DConditionModel,
 )
 from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import (
-    PIL_INTERPOLATION,
-    deprecate,
-    randn_tensor,
-)
+from diffusers.utils import PIL_INTERPOLATION, deprecate
+from diffusers.utils.torch_utils import randn_tensor
 
 
 EXAMPLE_DOC_STRING = """

examples/community/ddim_noise_comparative_analysis.py

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@
 
 from diffusers.pipeline_utils import DiffusionPipeline, ImagePipelineOutput
 from diffusers.schedulers import DDIMScheduler
-from diffusers.utils import randn_tensor
+from diffusers.utils.torch_utils import randn_tensor
 
 
 trans = transforms.Compose(

examples/community/lpw_stable_diffusion.py

Lines changed: 1 addition & 1 deletion
@@ -21,8 +21,8 @@
     is_accelerate_available,
     is_accelerate_version,
     logging,
-    randn_tensor,
 )
+from diffusers.utils.torch_utils import randn_tensor
 
 
 # ------------------------------------------------------------------------------

examples/community/lpw_stable_diffusion_xl.py

Lines changed: 1 addition & 1 deletion
@@ -30,9 +30,9 @@
     is_accelerate_version,
     is_invisible_watermark_available,
     logging,
-    randn_tensor,
     replace_example_docstring,
 )
+from diffusers.utils.torch_utils import randn_tensor
 
 
 if is_invisible_watermark_available():

examples/community/pipeline_fabric.py

Lines changed: 1 addition & 1 deletion
@@ -14,6 +14,7 @@
 from typing import List, Optional, Union
 
 import torch
+from diffusers.utils.torch_utils import randn_tensor
 from packaging import version
 from PIL import Image
 from transformers import CLIPTextModel, CLIPTokenizer
@@ -30,7 +31,6 @@
 from diffusers.utils import (
     deprecate,
     logging,
-    randn_tensor,
     replace_example_docstring,
 )

examples/community/pipeline_zero1to3.py

Lines changed: 1 addition & 1 deletion
@@ -35,9 +35,9 @@
     is_accelerate_available,
     is_accelerate_version,
     logging,
-    randn_tensor,
     replace_example_docstring,
 )
+from diffusers.utils.torch_utils import randn_tensor
 
 
 logger = logging.get_logger(__name__)  # pylint: disable=invalid-name

examples/community/run_onnx_controlnet.py

Lines changed: 1 addition & 1 deletion
@@ -8,6 +8,7 @@
 import numpy as np
 import PIL.Image
 import torch
+from diffusers.utils.torch_utils import randn_tensor
 from PIL import Image
 from transformers import CLIPTokenizer
 
@@ -19,7 +20,6 @@
 from diffusers.utils import (
     deprecate,
     logging,
-    randn_tensor,
     replace_example_docstring,
 )
 
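All of the community-example changes above follow the same migration: `randn_tensor` is no longer re-exported from `diffusers.utils` and must be imported from `diffusers.utils.torch_utils`. A hedged summary of the before/after import, with an illustrative call site (the shape and seed are placeholders, not from this diff):

```python
# The recurring change across these community pipelines, shown in one place.
import torch

# before: from diffusers.utils import PIL_INTERPOLATION, logging, randn_tensor
from diffusers.utils import PIL_INTERPOLATION, logging
from diffusers.utils.torch_utils import randn_tensor

# Typical call site inside a pipeline's latent-preparation step (shape is illustrative):
latents = randn_tensor((1, 4, 64, 64), generator=torch.manual_seed(0))
```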