diff --git a/docs/source/en/api/utilities.md b/docs/source/en/api/utilities.md
index 9edf3e37218a..abc38416053a 100644
--- a/docs/source/en/api/utilities.md
+++ b/docs/source/en/api/utilities.md
@@ -2,30 +2,26 @@
Utility and helper functions for working with 🤗 Diffusers.
-## randn_tensor
-
-[[autodoc]] diffusers.utils.randn_tensor
-
## numpy_to_pil
-[[autodoc]] utils.pil_utils.numpy_to_pil
+[[autodoc]] utils.numpy_to_pil
## pt_to_pil
-[[autodoc]] utils.pil_utils.pt_to_pil
+[[autodoc]] utils.pt_to_pil
## load_image
-[[autodoc]] utils.testing_utils.load_image
+[[autodoc]] utils.load_image
## export_to_gif
-[[autodoc]] utils.testing_utils.export_to_gif
+[[autodoc]] utils.export_to_gif
## export_to_video
-[[autodoc]] utils.testing_utils.export_to_video
+[[autodoc]] utils.export_to_video
## make_image_grid
-[[autodoc]] utils.pil_utils.make_image_grid
\ No newline at end of file
+[[autodoc]] utils.make_image_grid
diff --git a/docs/source/en/using-diffusers/reproducibility.md b/docs/source/en/using-diffusers/reproducibility.md
index b02ca070a1a2..0da760f0192d 100644
--- a/docs/source/en/using-diffusers/reproducibility.md
+++ b/docs/source/en/using-diffusers/reproducibility.md
@@ -28,7 +28,7 @@ This is why it's important to understand how to control sources of randomness in
## Control randomness
-During inference, pipelines rely heavily on random sampling operations which include creating the
+During inference, pipelines rely heavily on random sampling operations which include creating the
Gaussian noise tensors to denoise and adding noise to the scheduling step.
Take a look at the tensor values in the [`DDIMPipeline`] after two inference steps:
@@ -47,7 +47,7 @@ image = ddim(num_inference_steps=2, output_type="np").images
print(np.abs(image).sum())
```
-Running the code above prints one value, but if you run it again you get a different value. What is going on here?
+Running the code above prints one value, but if you run it again you get a different value. What is going on here?
Every time the pipeline is run, [`torch.randn`](https://pytorch.org/docs/stable/generated/torch.randn.html) uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.
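For instance, a trivial sketch of the underlying behavior (illustrative only, not one of the guide's own examples):

```python
import torch

print(torch.randn(2))  # different values on every run

torch.manual_seed(0)
print(torch.randn(2))  # deterministic once the global seed is fixed
```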
@@ -81,16 +81,16 @@ If you run this code example on your specific hardware and PyTorch version, you
-💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of
-just integer values representing the seed, but this is the recommended design when dealing with
-probabilistic models in PyTorch as `Generator`'s are *random states* that can be
+💡 It might be a bit unintuitive at first to pass `Generator` objects to the pipeline instead of
+just integer values representing the seed, but this is the recommended design when dealing with
+probabilistic models in PyTorch as `Generator`s are *random states* that can be
passed to multiple pipelines in a sequence.
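As a rough sketch of what "random state" means in practice (the checkpoint id is assumed for illustration and the snippet is not part of the official guide), the same `Generator` can be reused across calls, advancing its state each time, and re-seeded to reproduce a result:

```python
import torch

from diffusers import DDIMPipeline

# Checkpoint id assumed for illustration.
ddim = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")

generator = torch.Generator(device="cpu").manual_seed(0)
first = ddim(num_inference_steps=2, output_type="np", generator=generator).images
second = ddim(num_inference_steps=2, output_type="np", generator=generator).images  # state has advanced, so this differs

generator.manual_seed(0)  # restore the initial state
repeat = ddim(num_inference_steps=2, output_type="np", generator=generator).images  # should match `first` on CPU
```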
### GPU
-Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU:
+Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication - which diffusion pipelines require a lot of - is less deterministic on a GPU than a CPU. For example, if you run the same code example above on a GPU:
```python
import torch
@@ -113,7 +113,7 @@ print(np.abs(image).sum())
The result is not the same even though you're using an identical seed because the GPU uses a different random number generator than the CPU.
-To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
+To circumvent this problem, 🧨 Diffusers has a [`~diffusers.utils.torch_utils.randn_tensor`] function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The `randn_tensor` function is used everywhere inside the pipeline, allowing the user to **always** pass a CPU `Generator` even if the pipeline is run on a GPU.
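A minimal sketch of that behavior (assuming a CUDA device is available; the shape is arbitrary): the noise is sampled on the CPU from a seeded `Generator` and only then moved to the requested device.

```python
import torch

from diffusers.utils.torch_utils import randn_tensor

generator = torch.Generator(device="cpu").manual_seed(0)

# Sampled on the CPU from the seeded generator, then moved to the GPU.
noise = randn_tensor((1, 3, 64, 64), generator=generator, device=torch.device("cuda"))
print(noise.device)  # cuda:0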
You'll see the results are much closer now!
@@ -139,14 +139,14 @@ print(np.abs(image).sum())
💡 If reproducibility is important, we recommend always passing a CPU generator.
-The performance loss is often neglectable, and you'll generate much more similar
+The performance loss is often negligible, and you'll generate much more similar
values than if the pipeline had been run on a GPU.
-Finally, for more complex pipelines such as [`UnCLIPPipeline`], these are often extremely
-susceptible to precision error propagation. Don't expect similar results across
-different GPU hardware or PyTorch versions. In this case, you'll need to run
+Finally, more complex pipelines such as [`UnCLIPPipeline`] are often extremely
+susceptible to precision error propagation. Don't expect similar results across
+different GPU hardware or PyTorch versions. In this case, you'll need to run
exactly the same hardware and PyTorch version for full reproducibility.
## Deterministic algorithms
diff --git a/examples/community/clip_guided_images_mixing_stable_diffusion.py b/examples/community/clip_guided_images_mixing_stable_diffusion.py
index 8cf8e595292a..a6b477df6b7f 100644
--- a/examples/community/clip_guided_images_mixing_stable_diffusion.py
+++ b/examples/community/clip_guided_images_mixing_stable_diffusion.py
@@ -19,10 +19,8 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import (
- PIL_INTERPOLATION,
- randn_tensor,
-)
+from diffusers.utils import PIL_INTERPOLATION
+from diffusers.utils.torch_utils import randn_tensor
def preprocess(image, w, h):
diff --git a/examples/community/clip_guided_stable_diffusion_img2img.py b/examples/community/clip_guided_stable_diffusion_img2img.py
index a72a5a127c72..ad9ca804058c 100644
--- a/examples/community/clip_guided_stable_diffusion_img2img.py
+++ b/examples/community/clip_guided_stable_diffusion_img2img.py
@@ -19,11 +19,8 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import (
- PIL_INTERPOLATION,
- deprecate,
- randn_tensor,
-)
+from diffusers.utils import PIL_INTERPOLATION, deprecate
+from diffusers.utils.torch_utils import randn_tensor
EXAMPLE_DOC_STRING = """
diff --git a/examples/community/ddim_noise_comparative_analysis.py b/examples/community/ddim_noise_comparative_analysis.py
index c4f51c489ff4..e0784fc5138a 100644
--- a/examples/community/ddim_noise_comparative_analysis.py
+++ b/examples/community/ddim_noise_comparative_analysis.py
@@ -20,7 +20,7 @@
from diffusers.pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from diffusers.schedulers import DDIMScheduler
-from diffusers.utils import randn_tensor
+from diffusers.utils.torch_utils import randn_tensor
trans = transforms.Compose(
diff --git a/examples/community/lpw_stable_diffusion.py b/examples/community/lpw_stable_diffusion.py
index 19975e6ded87..89345a8a5eb3 100644
--- a/examples/community/lpw_stable_diffusion.py
+++ b/examples/community/lpw_stable_diffusion.py
@@ -21,8 +21,8 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
)
+from diffusers.utils.torch_utils import randn_tensor
# ------------------------------------------------------------------------------
diff --git a/examples/community/lpw_stable_diffusion_xl.py b/examples/community/lpw_stable_diffusion_xl.py
index abfbfb5aa1c1..2ee44b95ab0a 100644
--- a/examples/community/lpw_stable_diffusion_xl.py
+++ b/examples/community/lpw_stable_diffusion_xl.py
@@ -30,9 +30,9 @@
is_accelerate_version,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
if is_invisible_watermark_available():
diff --git a/examples/community/pipeline_fabric.py b/examples/community/pipeline_fabric.py
index 456e69cade13..c5783402b36c 100644
--- a/examples/community/pipeline_fabric.py
+++ b/examples/community/pipeline_fabric.py
@@ -14,6 +14,7 @@
from typing import List, Optional, Union
import torch
+from diffusers.utils.torch_utils import randn_tensor
from packaging import version
from PIL import Image
from transformers import CLIPTextModel, CLIPTokenizer
@@ -30,7 +31,6 @@
from diffusers.utils import (
deprecate,
logging,
- randn_tensor,
replace_example_docstring,
)
diff --git a/examples/community/pipeline_zero1to3.py b/examples/community/pipeline_zero1to3.py
index 8dc6874d2a86..c58d18508196 100644
--- a/examples/community/pipeline_zero1to3.py
+++ b/examples/community/pipeline_zero1to3.py
@@ -35,9 +35,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/run_onnx_controlnet.py b/examples/community/run_onnx_controlnet.py
index a79942b63a59..877138a408d2 100644
--- a/examples/community/run_onnx_controlnet.py
+++ b/examples/community/run_onnx_controlnet.py
@@ -8,6 +8,7 @@
import numpy as np
import PIL.Image
import torch
+from diffusers.utils.torch_utils import randn_tensor
from PIL import Image
from transformers import CLIPTokenizer
@@ -19,7 +20,6 @@
from diffusers.utils import (
deprecate,
logging,
- randn_tensor,
replace_example_docstring,
)
diff --git a/examples/community/run_tensorrt_controlnet.py b/examples/community/run_tensorrt_controlnet.py
index a9030663c12f..e3800be542ad 100644
--- a/examples/community/run_tensorrt_controlnet.py
+++ b/examples/community/run_tensorrt_controlnet.py
@@ -11,6 +11,7 @@
import pycuda.driver as cuda
import tensorrt as trt
import torch
+from diffusers.utils.torch_utils import randn_tensor
from PIL import Image
from pycuda.tools import make_default_context
from transformers import CLIPTokenizer
@@ -23,7 +24,6 @@
from diffusers.utils import (
deprecate,
logging,
- randn_tensor,
replace_example_docstring,
)
diff --git a/examples/community/stable_diffusion_controlnet_img2img.py b/examples/community/stable_diffusion_controlnet_img2img.py
index 200e9a62abb9..71009fb1aa69 100644
--- a/examples/community/stable_diffusion_controlnet_img2img.py
+++ b/examples/community/stable_diffusion_controlnet_img2img.py
@@ -16,9 +16,9 @@
PIL_INTERPOLATION,
is_accelerate_available,
is_accelerate_version,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_controlnet_inpaint.py b/examples/community/stable_diffusion_controlnet_inpaint.py
index 9f36363fb124..3cd9f9f0a258 100644
--- a/examples/community/stable_diffusion_controlnet_inpaint.py
+++ b/examples/community/stable_diffusion_controlnet_inpaint.py
@@ -17,9 +17,9 @@
PIL_INTERPOLATION,
is_accelerate_available,
is_accelerate_version,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_controlnet_inpaint_img2img.py b/examples/community/stable_diffusion_controlnet_inpaint_img2img.py
index 2f2acebe9aa0..341e89398f7d 100644
--- a/examples/community/stable_diffusion_controlnet_inpaint_img2img.py
+++ b/examples/community/stable_diffusion_controlnet_inpaint_img2img.py
@@ -16,9 +16,9 @@
PIL_INTERPOLATION,
is_accelerate_available,
is_accelerate_version,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_controlnet_reference.py b/examples/community/stable_diffusion_controlnet_reference.py
index 1503f9f6a883..0814c6b22af9 100644
--- a/examples/community/stable_diffusion_controlnet_reference.py
+++ b/examples/community/stable_diffusion_controlnet_reference.py
@@ -11,7 +11,8 @@
from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D
from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
-from diffusers.utils import is_compiled_module, logging, randn_tensor
+from diffusers.utils import logging
+from diffusers.utils.torch_utils import is_compiled_module, randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_ipex.py b/examples/community/stable_diffusion_ipex.py
index 146acb773a56..bef575559e07 100644
--- a/examples/community/stable_diffusion_ipex.py
+++ b/examples/community/stable_diffusion_ipex.py
@@ -31,9 +31,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_reference.py b/examples/community/stable_diffusion_reference.py
index 68e30f15bce6..3f46e05f653f 100644
--- a/examples/community/stable_diffusion_reference.py
+++ b/examples/community/stable_diffusion_reference.py
@@ -10,7 +10,8 @@
from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, CrossAttnUpBlock2D, DownBlock2D, UpBlock2D
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import rescale_noise_cfg
-from diffusers.utils import PIL_INTERPOLATION, logging, randn_tensor
+from diffusers.utils import PIL_INTERPOLATION, logging
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_repaint.py b/examples/community/stable_diffusion_repaint.py
index 3fd63d4b213a..dd0c9f683ec6 100644
--- a/examples/community/stable_diffusion_repaint.py
+++ b/examples/community/stable_diffusion_repaint.py
@@ -33,8 +33,8 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
)
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_diffusion_xl_reference.py b/examples/community/stable_diffusion_xl_reference.py
index b47c962701b6..7549135b220f 100644
--- a/examples/community/stable_diffusion_xl_reference.py
+++ b/examples/community/stable_diffusion_xl_reference.py
@@ -15,7 +15,8 @@
UpBlock2D,
)
from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
-from diffusers.utils import PIL_INTERPOLATION, logging, randn_tensor
+from diffusers.utils import PIL_INTERPOLATION, logging
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/stable_unclip.py b/examples/community/stable_unclip.py
index 1b438c8fcb3e..6acca20d6a78 100644
--- a/examples/community/stable_unclip.py
+++ b/examples/community/stable_unclip.py
@@ -8,7 +8,8 @@
from diffusers.models import PriorTransformer
from diffusers.pipelines import DiffusionPipeline, StableDiffusionImageVariationPipeline
from diffusers.schedulers import UnCLIPScheduler
-from diffusers.utils import logging, randn_tensor
+from diffusers.utils import logging
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/unclip_image_interpolation.py b/examples/community/unclip_image_interpolation.py
index 618ac25bdc95..98d88bb90c23 100644
--- a/examples/community/unclip_image_interpolation.py
+++ b/examples/community/unclip_image_interpolation.py
@@ -19,7 +19,8 @@
UNet2DModel,
)
from diffusers.pipelines.unclip import UnCLIPTextProjModel
-from diffusers.utils import is_accelerate_available, logging, randn_tensor
+from diffusers.utils import is_accelerate_available, logging
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/examples/community/unclip_text_interpolation.py b/examples/community/unclip_text_interpolation.py
index 290f45317004..764299433b4c 100644
--- a/examples/community/unclip_text_interpolation.py
+++ b/examples/community/unclip_text_interpolation.py
@@ -15,7 +15,8 @@
UNet2DModel,
)
from diffusers.pipelines.unclip import UnCLIPTextProjModel
-from diffusers.utils import is_accelerate_available, logging, randn_tensor
+from diffusers.utils import is_accelerate_available, logging
+from diffusers.utils.torch_utils import randn_tensor
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
diff --git a/src/diffusers/__init__.py b/src/diffusers/__init__.py
index d72c671671c1..87feab66503b 100644
--- a/src/diffusers/__init__.py
+++ b/src/diffusers/__init__.py
@@ -1,13 +1,12 @@
__version__ = "0.21.0.dev0"
-from .configuration_utils import ConfigMixin
+from typing import TYPE_CHECKING
+
from .utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_flax_available,
- is_inflect_available,
- is_invisible_watermark_available,
is_k_diffusion_available,
- is_k_diffusion_version,
is_librosa_available,
is_note_seq_available,
is_onnx_available,
@@ -15,272 +14,364 @@
is_torch_available,
is_torchsde_available,
is_transformers_available,
- is_transformers_version,
- is_unidecode_available,
- logging,
)
+# Lazy Import based on
+# https://github.com/huggingface/transformers/blob/main/src/transformers/__init__.py
+
+# When adding a new object to this init, please add it to `_import_structure` and to the `TYPE_CHECKING` block below.
+# `_import_structure` is a dictionary mapping submodule names to lists of object names, and is used to defer the
+# actual import until the objects are requested. This way `import diffusers` provides the names in the namespace
+# without actually importing anything (and especially none of the backends).
+
+_import_structure = {
+ "configuration_utils": ["ConfigMixin"],
+ "models": [],
+ "pipelines": [],
+ "schedulers": [],
+ "utils": [
+ "OptionalDependencyNotAvailable",
+ "is_flax_available",
+ "is_inflect_available",
+ "is_invisible_watermark_available",
+ "is_k_diffusion_available",
+ "is_k_diffusion_version",
+ "is_librosa_available",
+ "is_note_seq_available",
+ "is_onnx_available",
+ "is_scipy_available",
+ "is_torch_available",
+ "is_torchsde_available",
+ "is_transformers_available",
+ "is_transformers_version",
+ "is_unidecode_available",
+ "logging",
+ ],
+}
+
try:
if not is_onnx_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_onnx_objects import * # noqa F403
+ from .utils import dummy_onnx_objects # noqa F403
+
+ _import_structure["utils.dummy_onnx_objects"] = [
+ name for name in dir(dummy_onnx_objects) if not name.startswith("_")
+ ]
+
else:
- from .pipelines import OnnxRuntimeModel
+ _import_structure["pipelines"].extend(["OnnxRuntimeModel"])
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_pt_objects import * # noqa F403
+ from .utils import dummy_pt_objects # noqa F403
+
+ _import_structure["utils.dummy_pt_objects"] = [name for name in dir(dummy_pt_objects) if not name.startswith("_")]
+
else:
- from .models import (
- AsymmetricAutoencoderKL,
- AutoencoderKL,
- AutoencoderTiny,
- ControlNetModel,
- ModelMixin,
- MultiAdapter,
- PriorTransformer,
- T2IAdapter,
- T5FilmDecoder,
- Transformer2DModel,
- UNet1DModel,
- UNet2DConditionModel,
- UNet2DModel,
- UNet3DConditionModel,
- VQModel,
- )
- from .optimization import (
- get_constant_schedule,
- get_constant_schedule_with_warmup,
- get_cosine_schedule_with_warmup,
- get_cosine_with_hard_restarts_schedule_with_warmup,
- get_linear_schedule_with_warmup,
- get_polynomial_decay_schedule_with_warmup,
- get_scheduler,
+ _import_structure["models"].extend(
+ [
+ "AsymmetricAutoencoderKL",
+ "AutoencoderKL",
+ "AutoencoderTiny",
+ "ControlNetModel",
+ "ModelMixin",
+ "MultiAdapter",
+ "PriorTransformer",
+ "T2IAdapter",
+ "T5FilmDecoder",
+ "Transformer2DModel",
+ "UNet1DModel",
+ "UNet2DConditionModel",
+ "UNet2DModel",
+ "UNet3DConditionModel",
+ "VQModel",
+ ]
)
- from .pipelines import (
- AudioPipelineOutput,
- AutoPipelineForImage2Image,
- AutoPipelineForInpainting,
- AutoPipelineForText2Image,
- CLIPImageProjection,
- ConsistencyModelPipeline,
- DanceDiffusionPipeline,
- DDIMPipeline,
- DDPMPipeline,
- DiffusionPipeline,
- DiTPipeline,
- ImagePipelineOutput,
- KarrasVePipeline,
- LDMPipeline,
- LDMSuperResolutionPipeline,
- PNDMPipeline,
- RePaintPipeline,
- ScoreSdeVePipeline,
+ _import_structure["optimization"] = [
+ "get_constant_schedule",
+ "get_constant_schedule_with_warmup",
+ "get_cosine_schedule_with_warmup",
+ "get_cosine_with_hard_restarts_schedule_with_warmup",
+ "get_linear_schedule_with_warmup",
+ "get_polynomial_decay_schedule_with_warmup",
+ "get_scheduler",
+ ]
+
+ _import_structure["pipelines"].extend(
+ [
+ "AudioPipelineOutput",
+ "AutoPipelineForImage2Image",
+ "AutoPipelineForInpainting",
+ "AutoPipelineForText2Image",
+ "ConsistencyModelPipeline",
+ "DanceDiffusionPipeline",
+ "DDIMPipeline",
+ "DDPMPipeline",
+ "DiffusionPipeline",
+ "DiTPipeline",
+ "ImagePipelineOutput",
+ "KarrasVePipeline",
+ "LDMPipeline",
+ "LDMSuperResolutionPipeline",
+ "PNDMPipeline",
+ "RePaintPipeline",
+ "ScoreSdeVePipeline",
+ ]
)
- from .schedulers import (
- CMStochasticIterativeScheduler,
- DDIMInverseScheduler,
- DDIMParallelScheduler,
- DDIMScheduler,
- DDPMParallelScheduler,
- DDPMScheduler,
- DDPMWuerstchenScheduler,
- DEISMultistepScheduler,
- DPMSolverMultistepInverseScheduler,
- DPMSolverMultistepScheduler,
- DPMSolverSinglestepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- HeunDiscreteScheduler,
- IPNDMScheduler,
- KarrasVeScheduler,
- KDPM2AncestralDiscreteScheduler,
- KDPM2DiscreteScheduler,
- PNDMScheduler,
- RePaintScheduler,
- SchedulerMixin,
- ScoreSdeVeScheduler,
- UnCLIPScheduler,
- UniPCMultistepScheduler,
- VQDiffusionScheduler,
+ _import_structure["schedulers"].extend(
+ [
+ "CMStochasticIterativeScheduler",
+ "DDIMInverseScheduler",
+ "DDIMParallelScheduler",
+ "DDIMScheduler",
+ "DDPMParallelScheduler",
+ "DDPMScheduler",
+ "DDPMWuerstchenScheduler",
+ "DEISMultistepScheduler",
+ "DPMSolverMultistepInverseScheduler",
+ "DPMSolverMultistepScheduler",
+ "DPMSolverSinglestepScheduler",
+ "EulerAncestralDiscreteScheduler",
+ "EulerDiscreteScheduler",
+ "HeunDiscreteScheduler",
+ "IPNDMScheduler",
+ "KarrasVeScheduler",
+ "KDPM2AncestralDiscreteScheduler",
+ "KDPM2DiscreteScheduler",
+ "PNDMScheduler",
+ "RePaintScheduler",
+ "SchedulerMixin",
+ "ScoreSdeVeScheduler",
+ "UnCLIPScheduler",
+ "UniPCMultistepScheduler",
+ "VQDiffusionScheduler",
+ ]
)
- from .training_utils import EMAModel
+ _import_structure["training_utils"] = ["EMAModel"]
try:
if not (is_torch_available() and is_scipy_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_scipy_objects import * # noqa F403
+ from .utils import dummy_torch_and_scipy_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_scipy_objects"] = [
+ name for name in dir(dummy_torch_and_scipy_objects) if not name.startswith("_")
+ ]
+
else:
- from .schedulers import LMSDiscreteScheduler
+ _import_structure["schedulers"].extend(["LMSDiscreteScheduler"])
try:
if not (is_torch_available() and is_torchsde_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_torchsde_objects import * # noqa F403
+ from .utils import dummy_torch_and_torchsde_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_torchsde_objects"] = [
+ name for name in dir(dummy_torch_and_torchsde_objects) if not name.startswith("_")
+ ]
+
else:
- from .schedulers import DPMSolverSDEScheduler
+ _import_structure["schedulers"].extend(["DPMSolverSDEScheduler"])
try:
if not (is_torch_available() and is_transformers_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from .utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_transformers_objects"] = [
+ name for name in dir(dummy_torch_and_transformers_objects) if not name.startswith("_")
+ ]
+
else:
- from .pipelines import (
- AltDiffusionImg2ImgPipeline,
- AltDiffusionPipeline,
- AudioLDM2Pipeline,
- AudioLDM2ProjectionModel,
- AudioLDM2UNet2DConditionModel,
- AudioLDMPipeline,
- CycleDiffusionPipeline,
- IFImg2ImgPipeline,
- IFImg2ImgSuperResolutionPipeline,
- IFInpaintingPipeline,
- IFInpaintingSuperResolutionPipeline,
- IFPipeline,
- IFSuperResolutionPipeline,
- ImageTextPipelineOutput,
- KandinskyCombinedPipeline,
- KandinskyImg2ImgCombinedPipeline,
- KandinskyImg2ImgPipeline,
- KandinskyInpaintCombinedPipeline,
- KandinskyInpaintPipeline,
- KandinskyPipeline,
- KandinskyPriorPipeline,
- KandinskyV22CombinedPipeline,
- KandinskyV22ControlnetImg2ImgPipeline,
- KandinskyV22ControlnetPipeline,
- KandinskyV22Img2ImgCombinedPipeline,
- KandinskyV22Img2ImgPipeline,
- KandinskyV22InpaintCombinedPipeline,
- KandinskyV22InpaintPipeline,
- KandinskyV22Pipeline,
- KandinskyV22PriorEmb2EmbPipeline,
- KandinskyV22PriorPipeline,
- LDMTextToImagePipeline,
- MusicLDMPipeline,
- PaintByExamplePipeline,
- SemanticStableDiffusionPipeline,
- ShapEImg2ImgPipeline,
- ShapEPipeline,
- StableDiffusionAdapterPipeline,
- StableDiffusionAttendAndExcitePipeline,
- StableDiffusionControlNetImg2ImgPipeline,
- StableDiffusionControlNetInpaintPipeline,
- StableDiffusionControlNetPipeline,
- StableDiffusionDepth2ImgPipeline,
- StableDiffusionDiffEditPipeline,
- StableDiffusionGLIGENPipeline,
- StableDiffusionGLIGENTextImagePipeline,
- StableDiffusionImageVariationPipeline,
- StableDiffusionImg2ImgPipeline,
- StableDiffusionInpaintPipeline,
- StableDiffusionInpaintPipelineLegacy,
- StableDiffusionInstructPix2PixPipeline,
- StableDiffusionLatentUpscalePipeline,
- StableDiffusionLDM3DPipeline,
- StableDiffusionModelEditingPipeline,
- StableDiffusionPanoramaPipeline,
- StableDiffusionParadigmsPipeline,
- StableDiffusionPipeline,
- StableDiffusionPipelineSafe,
- StableDiffusionPix2PixZeroPipeline,
- StableDiffusionSAGPipeline,
- StableDiffusionUpscalePipeline,
- StableDiffusionXLAdapterPipeline,
- StableDiffusionXLControlNetImg2ImgPipeline,
- StableDiffusionXLControlNetInpaintPipeline,
- StableDiffusionXLControlNetPipeline,
- StableDiffusionXLImg2ImgPipeline,
- StableDiffusionXLInpaintPipeline,
- StableDiffusionXLInstructPix2PixPipeline,
- StableDiffusionXLPipeline,
- StableUnCLIPImg2ImgPipeline,
- StableUnCLIPPipeline,
- TextToVideoSDPipeline,
- TextToVideoZeroPipeline,
- UnCLIPImageVariationPipeline,
- UnCLIPPipeline,
- UniDiffuserModel,
- UniDiffuserPipeline,
- UniDiffuserTextDecoder,
- VersatileDiffusionDualGuidedPipeline,
- VersatileDiffusionImageVariationPipeline,
- VersatileDiffusionPipeline,
- VersatileDiffusionTextToImagePipeline,
- VideoToVideoSDPipeline,
- VQDiffusionPipeline,
- WuerstchenCombinedPipeline,
- WuerstchenDecoderPipeline,
- WuerstchenPriorPipeline,
+ _import_structure["pipelines"].extend(
+ [
+ "AltDiffusionImg2ImgPipeline",
+ "AltDiffusionPipeline",
+ "AudioLDM2Pipeline",
+ "AudioLDM2ProjectionModel",
+ "AudioLDM2UNet2DConditionModel",
+ "AudioLDMPipeline",
+ "CycleDiffusionPipeline",
+ "IFImg2ImgPipeline",
+ "IFImg2ImgSuperResolutionPipeline",
+ "IFInpaintingPipeline",
+ "IFInpaintingSuperResolutionPipeline",
+ "IFPipeline",
+ "IFSuperResolutionPipeline",
+ "ImageTextPipelineOutput",
+ "KandinskyCombinedPipeline",
+ "KandinskyImg2ImgCombinedPipeline",
+ "KandinskyImg2ImgPipeline",
+ "KandinskyInpaintCombinedPipeline",
+ "KandinskyInpaintPipeline",
+ "KandinskyPipeline",
+ "KandinskyPriorPipeline",
+ "KandinskyV22CombinedPipeline",
+ "KandinskyV22ControlnetImg2ImgPipeline",
+ "KandinskyV22ControlnetPipeline",
+ "KandinskyV22Img2ImgCombinedPipeline",
+ "KandinskyV22Img2ImgPipeline",
+ "KandinskyV22InpaintCombinedPipeline",
+ "KandinskyV22InpaintPipeline",
+ "KandinskyV22Pipeline",
+ "KandinskyV22PriorEmb2EmbPipeline",
+ "KandinskyV22PriorPipeline",
+ "LDMTextToImagePipeline",
+ "MusicLDMPipeline",
+ "PaintByExamplePipeline",
+ "SemanticStableDiffusionPipeline",
+ "ShapEImg2ImgPipeline",
+ "ShapEPipeline",
+ "StableDiffusionAdapterPipeline",
+ "StableDiffusionAttendAndExcitePipeline",
+ "StableDiffusionControlNetImg2ImgPipeline",
+ "StableDiffusionControlNetInpaintPipeline",
+ "StableDiffusionControlNetPipeline",
+ "StableDiffusionDepth2ImgPipeline",
+ "StableDiffusionDiffEditPipeline",
+ "StableDiffusionGLIGENPipeline",
+ "StableDiffusionGLIGENPipeline",
+ "StableDiffusionGLIGENTextImagePipeline",
+ "StableDiffusionImageVariationPipeline",
+ "StableDiffusionImg2ImgPipeline",
+ "StableDiffusionInpaintPipeline",
+ "StableDiffusionInpaintPipelineLegacy",
+ "StableDiffusionInstructPix2PixPipeline",
+ "StableDiffusionLatentUpscalePipeline",
+ "StableDiffusionLDM3DPipeline",
+ "StableDiffusionModelEditingPipeline",
+ "StableDiffusionPanoramaPipeline",
+ "StableDiffusionParadigmsPipeline",
+ "StableDiffusionPipeline",
+ "StableDiffusionPipelineSafe",
+ "StableDiffusionPix2PixZeroPipeline",
+ "StableDiffusionSAGPipeline",
+ "StableDiffusionUpscalePipeline",
+ "StableDiffusionXLAdapterPipeline",
+ "StableDiffusionXLControlNetImg2ImgPipeline",
+ "StableDiffusionXLControlNetInpaintPipeline",
+ "StableDiffusionXLControlNetPipeline",
+ "StableDiffusionXLImg2ImgPipeline",
+ "StableDiffusionXLInpaintPipeline",
+ "StableDiffusionXLInstructPix2PixPipeline",
+ "StableDiffusionXLPipeline",
+ "StableUnCLIPImg2ImgPipeline",
+ "StableUnCLIPPipeline",
+ "TextToVideoSDPipeline",
+ "TextToVideoZeroPipeline",
+ "UnCLIPImageVariationPipeline",
+ "UnCLIPPipeline",
+ "UniDiffuserModel",
+ "UniDiffuserPipeline",
+ "UniDiffuserTextDecoder",
+ "VersatileDiffusionDualGuidedPipeline",
+ "VersatileDiffusionImageVariationPipeline",
+ "VersatileDiffusionPipeline",
+ "VersatileDiffusionTextToImagePipeline",
+ "VideoToVideoSDPipeline",
+ "VQDiffusionPipeline",
+ "WuerstchenCombinedPipeline",
+ "WuerstchenDecoderPipeline",
+ "WuerstchenPriorPipeline",
+ ]
)
try:
if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
+ from .utils import dummy_torch_and_transformers_and_k_diffusion_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_transformers_and_k_diffusion_objects"] = [
+ name for name in dir(dummy_torch_and_transformers_and_k_diffusion_objects) if not name.startswith("_")
+ ]
+
else:
- from .pipelines import StableDiffusionKDiffusionPipeline
+ _import_structure["pipelines"].extend(["StableDiffusionKDiffusionPipeline"])
try:
if not (is_torch_available() and is_transformers_available() and is_onnx_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403
+ from .utils import dummy_torch_and_transformers_and_onnx_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_transformers_and_onnx_objects"] = [
+ name for name in dir(dummy_torch_and_transformers_and_onnx_objects) if not name.startswith("_")
+ ]
+
else:
- from .pipelines import (
- OnnxStableDiffusionImg2ImgPipeline,
- OnnxStableDiffusionInpaintPipeline,
- OnnxStableDiffusionInpaintPipelineLegacy,
- OnnxStableDiffusionPipeline,
- OnnxStableDiffusionUpscalePipeline,
- StableDiffusionOnnxPipeline,
+ _import_structure["pipelines"].extend(
+ [
+ "OnnxStableDiffusionImg2ImgPipeline",
+ "OnnxStableDiffusionInpaintPipeline",
+ "OnnxStableDiffusionInpaintPipelineLegacy",
+ "OnnxStableDiffusionPipeline",
+ "OnnxStableDiffusionUpscalePipeline",
+ "StableDiffusionOnnxPipeline",
+ ]
)
try:
if not (is_torch_available() and is_librosa_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_librosa_objects import * # noqa F403
+ from .utils import dummy_torch_and_librosa_objects # noqa F403
+
+ _import_structure["utils.dummy_torch_and_librosa_objects"] = [
+ name for name in dir(dummy_torch_and_librosa_objects) if not name.startswith("_")
+ ]
+
else:
- from .pipelines import AudioDiffusionPipeline, Mel
+ _import_structure["pipelines"].extend(["AudioDiffusionPipeline", "Mel"])
try:
if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
+ from .utils import dummy_transformers_and_torch_and_note_seq_objects # noqa F403
+
+ _import_structure["utils.dummy_transformers_and_torch_and_note_seq_objects"] = [
+ name for name in dir(dummy_transformers_and_torch_and_note_seq_objects) if not name.startswith("_")
+ ]
+
+
else:
- from .pipelines import SpectrogramDiffusionPipeline
+ _import_structure["pipelines"].extend(["SpectrogramDiffusionPipeline"])
try:
if not is_flax_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_flax_objects import * # noqa F403
+ from .utils import dummy_flax_objects # noqa F403
+
+ _import_structure["utils.dummy_flax_objects"] = [
+ name for name in dir(dummy_flax_objects) if not name.startswith("_")
+ ]
+
+
else:
- from .models.controlnet_flax import FlaxControlNetModel
- from .models.modeling_flax_utils import FlaxModelMixin
- from .models.unet_2d_condition_flax import FlaxUNet2DConditionModel
- from .models.vae_flax import FlaxAutoencoderKL
- from .pipelines import FlaxDiffusionPipeline
- from .schedulers import (
- FlaxDDIMScheduler,
- FlaxDDPMScheduler,
- FlaxDPMSolverMultistepScheduler,
- FlaxKarrasVeScheduler,
- FlaxLMSDiscreteScheduler,
- FlaxPNDMScheduler,
- FlaxSchedulerMixin,
- FlaxScoreSdeVeScheduler,
+ _import_structure["models.controlnet_flax"] = ["FlaxControlNetModel"]
+ _import_structure["models.modeling_flax_utils"] = ["FlaxModelMixin"]
+ _import_structure["models.unet_2d_condition_flax"] = ["FlaxUNet2DConditionModel"]
+ _import_structure["models.vae_flax"] = ["FlaxAutoencoderKL"]
+ _import_structure["pipelines"].extend(["FlaxDiffusionPipeline"])
+ _import_structure["schedulers"].extend(
+ [
+ "FlaxDDIMScheduler",
+ "FlaxDDPMScheduler",
+ "FlaxDPMSolverMultistepScheduler",
+ "FlaxKarrasVeScheduler",
+ "FlaxLMSDiscreteScheduler",
+ "FlaxPNDMScheduler",
+ "FlaxSchedulerMixin",
+ "FlaxScoreSdeVeScheduler",
+ ]
)
@@ -288,19 +379,330 @@
if not (is_flax_available() and is_transformers_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_flax_and_transformers_objects import * # noqa F403
+ from .utils import dummy_flax_and_transformers_objects # noqa F403
+
+ _import_structure["utils.dummy_flax_and_transformers_objects"] = [
+ name for name in dir(dummy_flax_and_transformers_objects) if not name.startswith("_")
+ ]
+
+
else:
- from .pipelines import (
- FlaxStableDiffusionControlNetPipeline,
- FlaxStableDiffusionImg2ImgPipeline,
- FlaxStableDiffusionInpaintPipeline,
- FlaxStableDiffusionPipeline,
+ _import_structure["pipelines"].extend(
+ [
+ "FlaxStableDiffusionControlNetPipeline",
+ "FlaxStableDiffusionImg2ImgPipeline",
+ "FlaxStableDiffusionInpaintPipeline",
+ "FlaxStableDiffusionPipeline",
+ ]
)
try:
if not (is_note_seq_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from .utils.dummy_note_seq_objects import * # noqa F403
+ from .utils import dummy_note_seq_objects # noqa F403
+
+ _import_structure["utils.dummy_note_seq_objects"] = [
+ name for name in dir(dummy_note_seq_objects) if not name.startswith("_")
+ ]
+
+
else:
- from .pipelines import MidiProcessor
+ _import_structure["pipelines"].extend(["MidiProcessor"])
+
+if TYPE_CHECKING:
+ from .configuration_utils import ConfigMixin
+
+ try:
+ if not is_onnx_available():
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_onnx_objects import * # noqa F403
+ else:
+ from .pipelines import OnnxRuntimeModel
+
+ try:
+ if not is_torch_available():
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_pt_objects import * # noqa F403
+ else:
+ from .models import (
+ AsymmetricAutoencoderKL,
+ AutoencoderKL,
+ AutoencoderTiny,
+ ControlNetModel,
+ ModelMixin,
+ MultiAdapter,
+ PriorTransformer,
+ T2IAdapter,
+ T5FilmDecoder,
+ Transformer2DModel,
+ UNet1DModel,
+ UNet2DConditionModel,
+ UNet2DModel,
+ UNet3DConditionModel,
+ VQModel,
+ )
+ from .optimization import (
+ get_constant_schedule,
+ get_constant_schedule_with_warmup,
+ get_cosine_schedule_with_warmup,
+ get_cosine_with_hard_restarts_schedule_with_warmup,
+ get_linear_schedule_with_warmup,
+ get_polynomial_decay_schedule_with_warmup,
+ get_scheduler,
+ )
+ from .pipelines import (
+ AudioPipelineOutput,
+ AutoPipelineForImage2Image,
+ AutoPipelineForInpainting,
+ AutoPipelineForText2Image,
+ CLIPImageProjection,
+ ConsistencyModelPipeline,
+ DanceDiffusionPipeline,
+ DDIMPipeline,
+ DDPMPipeline,
+ DiffusionPipeline,
+ DiTPipeline,
+ ImagePipelineOutput,
+ KarrasVePipeline,
+ LDMPipeline,
+ LDMSuperResolutionPipeline,
+ PNDMPipeline,
+ RePaintPipeline,
+ ScoreSdeVePipeline,
+ )
+ from .schedulers import (
+ CMStochasticIterativeScheduler,
+ DDIMInverseScheduler,
+ DDIMParallelScheduler,
+ DDIMScheduler,
+ DDPMParallelScheduler,
+ DDPMScheduler,
+ DDPMWuerstchenScheduler,
+ DEISMultistepScheduler,
+ DPMSolverMultistepInverseScheduler,
+ DPMSolverMultistepScheduler,
+ DPMSolverSinglestepScheduler,
+ EulerAncestralDiscreteScheduler,
+ EulerDiscreteScheduler,
+ HeunDiscreteScheduler,
+ IPNDMScheduler,
+ KarrasVeScheduler,
+ KDPM2AncestralDiscreteScheduler,
+ KDPM2DiscreteScheduler,
+ PNDMScheduler,
+ RePaintScheduler,
+ SchedulerMixin,
+ ScoreSdeVeScheduler,
+ UnCLIPScheduler,
+ UniPCMultistepScheduler,
+ VQDiffusionScheduler,
+ )
+ from .training_utils import EMAModel
+
+ try:
+ if not (is_torch_available() and is_scipy_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_scipy_objects import * # noqa F403
+ else:
+ from .schedulers import LMSDiscreteScheduler
+
+ try:
+ if not (is_torch_available() and is_torchsde_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_torchsde_objects import * # noqa F403
+ else:
+ from .schedulers import DPMSolverSDEScheduler
+
+ try:
+ if not (is_torch_available() and is_transformers_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_transformers_objects import * # noqa F403
+ else:
+ from .pipelines import (
+ AltDiffusionImg2ImgPipeline,
+ AltDiffusionPipeline,
+ AudioLDM2Pipeline,
+ AudioLDM2ProjectionModel,
+ AudioLDM2UNet2DConditionModel,
+ AudioLDMPipeline,
+ CycleDiffusionPipeline,
+ IFImg2ImgPipeline,
+ IFImg2ImgSuperResolutionPipeline,
+ IFInpaintingPipeline,
+ IFInpaintingSuperResolutionPipeline,
+ IFPipeline,
+ IFSuperResolutionPipeline,
+ ImageTextPipelineOutput,
+ KandinskyCombinedPipeline,
+ KandinskyImg2ImgCombinedPipeline,
+ KandinskyImg2ImgPipeline,
+ KandinskyInpaintCombinedPipeline,
+ KandinskyInpaintPipeline,
+ KandinskyPipeline,
+ KandinskyPriorPipeline,
+ KandinskyV22CombinedPipeline,
+ KandinskyV22ControlnetImg2ImgPipeline,
+ KandinskyV22ControlnetPipeline,
+ KandinskyV22Img2ImgCombinedPipeline,
+ KandinskyV22Img2ImgPipeline,
+ KandinskyV22InpaintCombinedPipeline,
+ KandinskyV22InpaintPipeline,
+ KandinskyV22Pipeline,
+ KandinskyV22PriorEmb2EmbPipeline,
+ KandinskyV22PriorPipeline,
+ LDMTextToImagePipeline,
+ MusicLDMPipeline,
+ PaintByExamplePipeline,
+ SemanticStableDiffusionPipeline,
+ ShapEImg2ImgPipeline,
+ ShapEPipeline,
+ StableDiffusionAdapterPipeline,
+ StableDiffusionAttendAndExcitePipeline,
+ StableDiffusionControlNetImg2ImgPipeline,
+ StableDiffusionControlNetInpaintPipeline,
+ StableDiffusionControlNetPipeline,
+ StableDiffusionDepth2ImgPipeline,
+ StableDiffusionDiffEditPipeline,
+ StableDiffusionGLIGENPipeline,
+ StableDiffusionGLIGENTextImagePipeline,
+ StableDiffusionImageVariationPipeline,
+ StableDiffusionImg2ImgPipeline,
+ StableDiffusionInpaintPipeline,
+ StableDiffusionInpaintPipelineLegacy,
+ StableDiffusionInstructPix2PixPipeline,
+ StableDiffusionLatentUpscalePipeline,
+ StableDiffusionLDM3DPipeline,
+ StableDiffusionModelEditingPipeline,
+ StableDiffusionPanoramaPipeline,
+ StableDiffusionParadigmsPipeline,
+ StableDiffusionPipeline,
+ StableDiffusionPipelineSafe,
+ StableDiffusionPix2PixZeroPipeline,
+ StableDiffusionSAGPipeline,
+ StableDiffusionUpscalePipeline,
+ StableDiffusionXLAdapterPipeline,
+ StableDiffusionXLControlNetImg2ImgPipeline,
+ StableDiffusionXLControlNetInpaintPipeline,
+ StableDiffusionXLControlNetPipeline,
+ StableDiffusionXLImg2ImgPipeline,
+ StableDiffusionXLInpaintPipeline,
+ StableDiffusionXLInstructPix2PixPipeline,
+ StableDiffusionXLPipeline,
+ StableUnCLIPImg2ImgPipeline,
+ StableUnCLIPPipeline,
+ TextToVideoSDPipeline,
+ TextToVideoZeroPipeline,
+ UnCLIPImageVariationPipeline,
+ UnCLIPPipeline,
+ UniDiffuserModel,
+ UniDiffuserPipeline,
+ UniDiffuserTextDecoder,
+ VersatileDiffusionDualGuidedPipeline,
+ VersatileDiffusionImageVariationPipeline,
+ VersatileDiffusionPipeline,
+ VersatileDiffusionTextToImagePipeline,
+ VideoToVideoSDPipeline,
+ VQDiffusionPipeline,
+ WuerstchenCombinedPipeline,
+ WuerstchenDecoderPipeline,
+ WuerstchenPriorPipeline,
+ )
+
+ try:
+ if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
+ else:
+ from .pipelines import StableDiffusionKDiffusionPipeline
+
+ try:
+ if not (is_torch_available() and is_transformers_available() and is_onnx_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403
+ else:
+ from .pipelines import (
+ OnnxStableDiffusionImg2ImgPipeline,
+ OnnxStableDiffusionInpaintPipeline,
+ OnnxStableDiffusionInpaintPipelineLegacy,
+ OnnxStableDiffusionPipeline,
+ OnnxStableDiffusionUpscalePipeline,
+ StableDiffusionOnnxPipeline,
+ )
+
+ try:
+ if not (is_torch_available() and is_librosa_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_torch_and_librosa_objects import * # noqa F403
+ else:
+ from .pipelines import AudioDiffusionPipeline, Mel
+
+ try:
+ if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
+ else:
+ from .pipelines import SpectrogramDiffusionPipeline
+
+ try:
+ if not is_flax_available():
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_flax_objects import * # noqa F403
+ else:
+ from .models.controlnet_flax import FlaxControlNetModel
+ from .models.modeling_flax_utils import FlaxModelMixin
+ from .models.unet_2d_condition_flax import FlaxUNet2DConditionModel
+ from .models.vae_flax import FlaxAutoencoderKL
+ from .pipelines import FlaxDiffusionPipeline
+ from .schedulers import (
+ FlaxDDIMScheduler,
+ FlaxDDPMScheduler,
+ FlaxDPMSolverMultistepScheduler,
+ FlaxKarrasVeScheduler,
+ FlaxLMSDiscreteScheduler,
+ FlaxPNDMScheduler,
+ FlaxSchedulerMixin,
+ FlaxScoreSdeVeScheduler,
+ )
+
+ try:
+ if not (is_flax_available() and is_transformers_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_flax_and_transformers_objects import * # noqa F403
+ else:
+ from .pipelines import (
+ FlaxStableDiffusionControlNetPipeline,
+ FlaxStableDiffusionImg2ImgPipeline,
+ FlaxStableDiffusionInpaintPipeline,
+ FlaxStableDiffusionPipeline,
+ )
+
+ try:
+ if not (is_note_seq_available()):
+ raise OptionalDependencyNotAvailable()
+ except OptionalDependencyNotAvailable:
+ from .utils.dummy_note_seq_objects import * # noqa F403
+ else:
+ from .pipelines import MidiProcessor
+
+else:
+ import sys
+
+ sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+ extra_objects={"__version__": __version__},
+ )
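To make the runtime effect of this refactor concrete, here is a minimal sketch (illustrative only, not part of the patch; it assumes PyTorch is installed so `DDIMPipeline` is registered in `_import_structure` rather than provided as a dummy object):

```python
import sys

import diffusers  # heavy submodules are not imported yet, only the proxy is set up

print(type(diffusers).__name__)              # _LazyModule
print("diffusers.pipelines" in sys.modules)  # typically False: nothing resolved yet

pipe_cls = diffusers.DDIMPipeline            # attribute access triggers the real import
print("diffusers.pipelines" in sys.modules)  # True after access
```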
diff --git a/src/diffusers/experimental/rl/value_guided_sampling.py b/src/diffusers/experimental/rl/value_guided_sampling.py
index e58952aa207f..262039be4fdb 100644
--- a/src/diffusers/experimental/rl/value_guided_sampling.py
+++ b/src/diffusers/experimental/rl/value_guided_sampling.py
@@ -18,8 +18,8 @@
from ...models.unet_1d import UNet1DModel
from ...pipelines import DiffusionPipeline
-from ...utils import randn_tensor
from ...utils.dummy_pt_objects import DDPMScheduler
+from ...utils.torch_utils import randn_tensor
class ValueGuidedRLPipeline(DiffusionPipeline):
diff --git a/src/diffusers/models/__init__.py b/src/diffusers/models/__init__.py
index 54e77df0ff72..fc60ff845ccf 100644
--- a/src/diffusers/models/__init__.py
+++ b/src/diffusers/models/__init__.py
@@ -12,27 +12,35 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from ..utils import is_flax_available, is_torch_available
+from ..utils import _LazyModule, is_flax_available, is_torch_available
+_import_structure = {}
+
if is_torch_available():
- from .adapter import MultiAdapter, T2IAdapter
- from .autoencoder_asym_kl import AsymmetricAutoencoderKL
- from .autoencoder_kl import AutoencoderKL
- from .autoencoder_tiny import AutoencoderTiny
- from .controlnet import ControlNetModel
- from .dual_transformer_2d import DualTransformer2DModel
- from .modeling_utils import ModelMixin
- from .prior_transformer import PriorTransformer
- from .t5_film_transformer import T5FilmDecoder
- from .transformer_2d import Transformer2DModel
- from .unet_1d import UNet1DModel
- from .unet_2d import UNet2DModel
- from .unet_2d_condition import UNet2DConditionModel
- from .unet_3d_condition import UNet3DConditionModel
- from .vq_model import VQModel
+ _import_structure["adapter"] = ["MultiAdapter", "T2IAdapter"]
+ _import_structure["autoencoder_asym_kl"] = ["AsymmetricAutoencoderKL"]
+ _import_structure["autoencoder_kl"] = ["AutoencoderKL"]
+ _import_structure["autoencoder_tiny"] = ["AutoencoderTiny"]
+ _import_structure["controlnet"] = ["ControlNetModel"]
+ _import_structure["dual_transformer_2d"] = ["DualTransformer2DModel"]
+ _import_structure["modeling_utils"] = ["ModelMixin"]
+ _import_structure["prior_transformer"] = ["PriorTransformer"]
+ _import_structure["t5_film_transformer"] = ["T5FilmDecoder"]
+ _import_structure["transformer_2d"] = ["Transformer2DModel"]
+ _import_structure["transformer_temporal"] = ["TransformerTemporalModel"]
+ _import_structure["unet_1d"] = ["UNet1DModel"]
+ _import_structure["unet_2d"] = ["UNet2DModel"]
+ _import_structure["unet_2d_condition"] = ["UNet2DConditionModel"]
+ _import_structure["unet_3d_condition"] = ["UNet3DConditionModel"]
+ _import_structure["vq_model"] = ["VQModel"]
if is_flax_available():
- from .controlnet_flax import FlaxControlNetModel
- from .unet_2d_condition_flax import FlaxUNet2DConditionModel
- from .vae_flax import FlaxAutoencoderKL
+ _import_structure["controlnet_flax"] = ["FlaxControlNetModel"]
+ _import_structure["unet_2d_condition_flax"] = ["FlaxUNet2DConditionModel"]
+ _import_structure["vae_flax"] = ["FlaxAutoencoderKL"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
diff --git a/src/diffusers/models/attention.py b/src/diffusers/models/attention.py
index 185b87f2046a..892d44a03137 100644
--- a/src/diffusers/models/attention.py
+++ b/src/diffusers/models/attention.py
@@ -17,7 +17,7 @@
import torch.nn.functional as F
from torch import nn
-from ..utils import maybe_allow_in_graph
+from ..utils.torch_utils import maybe_allow_in_graph
from .activations import get_activation
from .attention_processor import Attention
from .embeddings import CombinedTimestepLabelEmbeddings
diff --git a/src/diffusers/models/attention_processor.py b/src/diffusers/models/attention_processor.py
index 49fc2c638620..c8e7dc66802c 100644
--- a/src/diffusers/models/attention_processor.py
+++ b/src/diffusers/models/attention_processor.py
@@ -18,8 +18,9 @@
import torch.nn.functional as F
from torch import nn
-from ..utils import deprecate, logging, maybe_allow_in_graph
+from ..utils import deprecate, logging
from ..utils.import_utils import is_xformers_available
+from ..utils.torch_utils import maybe_allow_in_graph
from .lora import LoRACompatibleLinear, LoRALinearLayer
diff --git a/src/diffusers/models/autoencoder_asym_kl.py b/src/diffusers/models/autoencoder_asym_kl.py
index e286cb215dbf..d8099120918b 100644
--- a/src/diffusers/models/autoencoder_asym_kl.py
+++ b/src/diffusers/models/autoencoder_asym_kl.py
@@ -17,7 +17,7 @@
import torch.nn as nn
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import apply_forward_hook
+from ..utils.accelerate_utils import apply_forward_hook
from .autoencoder_kl import AutoencoderKLOutput
from .modeling_utils import ModelMixin
from .vae import DecoderOutput, DiagonalGaussianDistribution, Encoder, MaskConditionDecoder
diff --git a/src/diffusers/models/autoencoder_kl.py b/src/diffusers/models/autoencoder_kl.py
index 72157e5827b4..76666a4cc295 100644
--- a/src/diffusers/models/autoencoder_kl.py
+++ b/src/diffusers/models/autoencoder_kl.py
@@ -19,7 +19,8 @@
from ..configuration_utils import ConfigMixin, register_to_config
from ..loaders import FromOriginalVAEMixin
-from ..utils import BaseOutput, apply_forward_hook
+from ..utils import BaseOutput
+from ..utils.accelerate_utils import apply_forward_hook
from .attention_processor import (
ADDED_KV_ATTENTION_PROCESSORS,
CROSS_ATTENTION_PROCESSORS,
diff --git a/src/diffusers/models/autoencoder_tiny.py b/src/diffusers/models/autoencoder_tiny.py
index ad36b7a2ce66..407b1906bba4 100644
--- a/src/diffusers/models/autoencoder_tiny.py
+++ b/src/diffusers/models/autoencoder_tiny.py
@@ -19,7 +19,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, apply_forward_hook
+from ..utils import BaseOutput
+from ..utils.accelerate_utils import apply_forward_hook
from .modeling_utils import ModelMixin
from .vae import DecoderOutput, DecoderTiny, EncoderTiny
diff --git a/src/diffusers/models/vae.py b/src/diffusers/models/vae.py
index 220c0ce990c8..36983eefc01f 100644
--- a/src/diffusers/models/vae.py
+++ b/src/diffusers/models/vae.py
@@ -18,7 +18,8 @@
import torch
import torch.nn as nn
-from ..utils import BaseOutput, is_torch_version, randn_tensor
+from ..utils import BaseOutput, is_torch_version
+from ..utils.torch_utils import randn_tensor
from .activations import get_activation
from .attention_processor import SpatialNorm
from .unet_2d_blocks import AutoencoderTinyBlock, UNetMidBlock2D, get_down_block, get_up_block
diff --git a/src/diffusers/models/vq_model.py b/src/diffusers/models/vq_model.py
index 393a638d483b..0c15300af213 100644
--- a/src/diffusers/models/vq_model.py
+++ b/src/diffusers/models/vq_model.py
@@ -18,7 +18,8 @@
import torch.nn as nn
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, apply_forward_hook
+from ..utils import BaseOutput
+from ..utils.accelerate_utils import apply_forward_hook
from .modeling_utils import ModelMixin
from .vae import Decoder, DecoderOutput, Encoder, VectorQuantizer
diff --git a/src/diffusers/pipelines/__init__.py b/src/diffusers/pipelines/__init__.py
index 28f42ce9fae9..b237adae7d54 100644
--- a/src/diffusers/pipelines/__init__.py
+++ b/src/diffusers/pipelines/__init__.py
@@ -1,5 +1,7 @@
from ..utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_flax_available,
is_k_diffusion_available,
is_librosa_available,
@@ -10,187 +12,256 @@
)
+# These modules contain pipelines from multiple libraries/frameworks
+_import_structure = {"stable_diffusion": [], "latent_diffusion": [], "controlnet": []}
+_dummy_objects = {}
+
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_pt_objects import * # noqa F403
+ from ..utils import dummy_pt_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_pt_objects))
+
else:
- from .auto_pipeline import AutoPipelineForImage2Image, AutoPipelineForInpainting, AutoPipelineForText2Image
- from .consistency_models import ConsistencyModelPipeline
- from .dance_diffusion import DanceDiffusionPipeline
- from .ddim import DDIMPipeline
- from .ddpm import DDPMPipeline
- from .dit import DiTPipeline
- from .latent_diffusion import LDMSuperResolutionPipeline
- from .latent_diffusion_uncond import LDMPipeline
- from .pipeline_utils import AudioPipelineOutput, DiffusionPipeline, ImagePipelineOutput
- from .pndm import PNDMPipeline
- from .repaint import RePaintPipeline
- from .score_sde_ve import ScoreSdeVePipeline
- from .stochastic_karras_ve import KarrasVePipeline
+ _import_structure["auto_pipeline"] = [
+ "AutoPipelineForImage2Image",
+ "AutoPipelineForInpainting",
+ "AutoPipelineForText2Image",
+ ]
+ _import_structure["consistency_models"] = ["ConsistencyModelPipeline"]
+ _import_structure["dance_diffusion"] = ["DanceDiffusionPipeline"]
+ _import_structure["ddim"] = ["DDIMPipeline"]
+ _import_structure["ddpm"] = ["DDPMPipeline"]
+ _import_structure["dit"] = ["DiTPipeline"]
+ _import_structure["latent_diffusion"].extend(["LDMSuperResolutionPipeline"])
+ _import_structure["latent_diffusion_uncond"] = ["LDMPipeline"]
+ _import_structure["pipeline_utils"] = ["AudioPipelineOutput", "DiffusionPipeline", "ImagePipelineOutput"]
+ _import_structure["pndm"] = ["PNDMPipeline"]
+ _import_structure["repaint"] = ["RePaintPipeline"]
+ _import_structure["score_sde_ve"] = ["ScoreSdeVePipeline"]
+ _import_structure["stochastic_karras_ve"] = ["KarrasVePipeline"]
try:
if not (is_torch_available() and is_librosa_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_librosa_objects import * # noqa F403
+ from ..utils import dummy_torch_and_librosa_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_librosa_objects))
+
else:
- from .audio_diffusion import AudioDiffusionPipeline, Mel
+ _import_structure["audio_diffusion"] = ["AudioDiffusionPipeline", "Mel"]
try:
if not (is_torch_available() and is_transformers_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ..utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .alt_diffusion import AltDiffusionImg2ImgPipeline, AltDiffusionPipeline
- from .audioldm import AudioLDMPipeline
- from .audioldm2 import AudioLDM2Pipeline, AudioLDM2ProjectionModel, AudioLDM2UNet2DConditionModel
- from .controlnet import (
- StableDiffusionControlNetImg2ImgPipeline,
- StableDiffusionControlNetInpaintPipeline,
- StableDiffusionControlNetPipeline,
- StableDiffusionXLControlNetImg2ImgPipeline,
- StableDiffusionXLControlNetInpaintPipeline,
- StableDiffusionXLControlNetPipeline,
- )
- from .deepfloyd_if import (
- IFImg2ImgPipeline,
- IFImg2ImgSuperResolutionPipeline,
- IFInpaintingPipeline,
- IFInpaintingSuperResolutionPipeline,
- IFPipeline,
- IFSuperResolutionPipeline,
+ _import_structure["alt_diffusion"] = ["AltDiffusionImg2ImgPipeline", "AltDiffusionPipeline"]
+ _import_structure["audioldm"] = ["AudioLDMPipeline"]
+ _import_structure["audioldm2"] = ["AudioLDM2Pipeline", "AudioLDM2ProjectionModel", "AudioLDM2UNet2DConditionModel"]
+ _import_structure["controlnet"].extend(
+ [
+ "StableDiffusionControlNetImg2ImgPipeline",
+ "StableDiffusionControlNetInpaintPipeline",
+ "StableDiffusionControlNetPipeline",
+ "StableDiffusionXLControlNetImg2ImgPipeline",
+ "StableDiffusionXLControlNetInpaintPipeline",
+ "StableDiffusionXLControlNetPipeline",
+ ]
)
- from .kandinsky import (
- KandinskyCombinedPipeline,
- KandinskyImg2ImgCombinedPipeline,
- KandinskyImg2ImgPipeline,
- KandinskyInpaintCombinedPipeline,
- KandinskyInpaintPipeline,
- KandinskyPipeline,
- KandinskyPriorPipeline,
+ _import_structure["deepfloyd_if"] = [
+ "IFImg2ImgPipeline",
+ "IFImg2ImgSuperResolutionPipeline",
+ "IFInpaintingPipeline",
+ "IFInpaintingSuperResolutionPipeline",
+ "IFPipeline",
+ "IFSuperResolutionPipeline",
+ ]
+ _import_structure["kandinsky"] = [
+ "KandinskyCombinedPipeline",
+ "KandinskyImg2ImgCombinedPipeline",
+ "KandinskyImg2ImgPipeline",
+ "KandinskyInpaintCombinedPipeline",
+ "KandinskyInpaintPipeline",
+ "KandinskyPipeline",
+ "KandinskyPriorPipeline",
+ ]
+ _import_structure["kandinsky2_2"] = [
+ "KandinskyV22CombinedPipeline",
+ "KandinskyV22ControlnetImg2ImgPipeline",
+ "KandinskyV22ControlnetPipeline",
+ "KandinskyV22Img2ImgCombinedPipeline",
+ "KandinskyV22Img2ImgPipeline",
+ "KandinskyV22InpaintCombinedPipeline",
+ "KandinskyV22InpaintPipeline",
+ "KandinskyV22Pipeline",
+ "KandinskyV22PriorEmb2EmbPipeline",
+ "KandinskyV22PriorPipeline",
+ ]
+ _import_structure["latent_diffusion"].extend(["LDMTextToImagePipeline"])
+ _import_structure["musicldm"] = ["MusicLDMPipeline"]
+ _import_structure["paint_by_example"] = ["PaintByExamplePipeline"]
+ _import_structure["semantic_stable_diffusion"] = ["SemanticStableDiffusionPipeline"]
+ _import_structure["shap_e"] = ["ShapEImg2ImgPipeline", "ShapEPipeline"]
+ _import_structure["stable_diffusion"].extend(
+ [
+ "CycleDiffusionPipeline",
+ "StableDiffusionAttendAndExcitePipeline",
+ "StableDiffusionDepth2ImgPipeline",
+ "StableDiffusionDiffEditPipeline",
+ "StableDiffusionGLIGENPipeline",
+ "StableDiffusionImageVariationPipeline",
+ "StableDiffusionImg2ImgPipeline",
+ "StableDiffusionInpaintPipeline",
+ "StableDiffusionInpaintPipelineLegacy",
+ "StableDiffusionInstructPix2PixPipeline",
+ "StableDiffusionLatentUpscalePipeline",
+ "StableDiffusionLDM3DPipeline",
+ "StableDiffusionModelEditingPipeline",
+ "StableDiffusionPanoramaPipeline",
+ "StableDiffusionParadigmsPipeline",
+ "StableDiffusionPipeline",
+ "StableDiffusionPix2PixZeroPipeline",
+ "StableDiffusionSAGPipeline",
+ "StableDiffusionUpscalePipeline",
+ "StableUnCLIPImg2ImgPipeline",
+ "StableUnCLIPPipeline",
+ "StableDiffusionGLIGENTextImagePipeline",
+ "StableDiffusionGLIGENPipeline",
+ ]
)
- from .kandinsky2_2 import (
- KandinskyV22CombinedPipeline,
- KandinskyV22ControlnetImg2ImgPipeline,
- KandinskyV22ControlnetPipeline,
- KandinskyV22Img2ImgCombinedPipeline,
- KandinskyV22Img2ImgPipeline,
- KandinskyV22InpaintCombinedPipeline,
- KandinskyV22InpaintPipeline,
- KandinskyV22Pipeline,
- KandinskyV22PriorEmb2EmbPipeline,
- KandinskyV22PriorPipeline,
- )
- from .latent_diffusion import LDMTextToImagePipeline
- from .musicldm import MusicLDMPipeline
- from .paint_by_example import PaintByExamplePipeline
- from .semantic_stable_diffusion import SemanticStableDiffusionPipeline
- from .shap_e import ShapEImg2ImgPipeline, ShapEPipeline
- from .stable_diffusion import (
- CycleDiffusionPipeline,
- StableDiffusionAttendAndExcitePipeline,
- StableDiffusionDepth2ImgPipeline,
- StableDiffusionDiffEditPipeline,
- StableDiffusionGLIGENPipeline,
- StableDiffusionGLIGENTextImagePipeline,
- StableDiffusionImageVariationPipeline,
- StableDiffusionImg2ImgPipeline,
- StableDiffusionInpaintPipeline,
- StableDiffusionInpaintPipelineLegacy,
- StableDiffusionInstructPix2PixPipeline,
- StableDiffusionLatentUpscalePipeline,
- StableDiffusionLDM3DPipeline,
- StableDiffusionModelEditingPipeline,
- StableDiffusionPanoramaPipeline,
- StableDiffusionParadigmsPipeline,
- StableDiffusionPipeline,
- StableDiffusionPix2PixZeroPipeline,
- StableDiffusionSAGPipeline,
- StableDiffusionUpscalePipeline,
- StableUnCLIPImg2ImgPipeline,
- StableUnCLIPPipeline,
- )
- from .stable_diffusion.clip_image_project_model import CLIPImageProjection
- from .stable_diffusion_safe import StableDiffusionPipelineSafe
- from .stable_diffusion_xl import (
- StableDiffusionXLImg2ImgPipeline,
- StableDiffusionXLInpaintPipeline,
- StableDiffusionXLInstructPix2PixPipeline,
- StableDiffusionXLPipeline,
- )
- from .t2i_adapter import StableDiffusionAdapterPipeline, StableDiffusionXLAdapterPipeline
- from .text_to_video_synthesis import TextToVideoSDPipeline, TextToVideoZeroPipeline, VideoToVideoSDPipeline
- from .unclip import UnCLIPImageVariationPipeline, UnCLIPPipeline
- from .unidiffuser import ImageTextPipelineOutput, UniDiffuserModel, UniDiffuserPipeline, UniDiffuserTextDecoder
- from .versatile_diffusion import (
- VersatileDiffusionDualGuidedPipeline,
- VersatileDiffusionImageVariationPipeline,
- VersatileDiffusionPipeline,
- VersatileDiffusionTextToImagePipeline,
- )
- from .vq_diffusion import VQDiffusionPipeline
- from .wuerstchen import WuerstchenCombinedPipeline, WuerstchenDecoderPipeline, WuerstchenPriorPipeline
+ _import_structure["stable_diffusion_safe"] = ["StableDiffusionPipelineSafe"]
+ _import_structure["stable_diffusion_xl"] = [
+ "StableDiffusionXLImg2ImgPipeline",
+ "StableDiffusionXLInpaintPipeline",
+ "StableDiffusionXLInstructPix2PixPipeline",
+ "StableDiffusionXLPipeline",
+ ]
+ _import_structure["t2i_adapter"] = ["StableDiffusionAdapterPipeline", "StableDiffusionXLAdapterPipeline"]
+ _import_structure["text_to_video_synthesis"] = [
+ "TextToVideoSDPipeline",
+ "TextToVideoZeroPipeline",
+ "VideoToVideoSDPipeline",
+ ]
+ _import_structure["unclip"] = ["UnCLIPImageVariationPipeline", "UnCLIPPipeline"]
+ _import_structure["unidiffuser"] = [
+ "ImageTextPipelineOutput",
+ "UniDiffuserModel",
+ "UniDiffuserPipeline",
+ "UniDiffuserTextDecoder",
+ ]
+ _import_structure["versatile_diffusion"] = [
+ "VersatileDiffusionDualGuidedPipeline",
+ "VersatileDiffusionImageVariationPipeline",
+ "VersatileDiffusionPipeline",
+ "VersatileDiffusionTextToImagePipeline",
+ ]
+ _import_structure["vq_diffusion"] = ["VQDiffusionPipeline"]
+ _import_structure["wuerstchen"] = [
+ "WuerstchenCombinedPipeline",
+ "WuerstchenDecoderPipeline",
+ "WuerstchenPriorPipeline",
+ ]
try:
if not is_onnx_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_onnx_objects import * # noqa F403
+ from ..utils import dummy_onnx_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_onnx_objects))
+
else:
- from .onnx_utils import OnnxRuntimeModel
+ _import_structure["onnx_utils"] = ["OnnxRuntimeModel"]
try:
if not (is_torch_available() and is_transformers_available() and is_onnx_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403
+ from ..utils import dummy_torch_and_transformers_and_onnx_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_and_onnx_objects))
+
else:
- from .stable_diffusion import (
- OnnxStableDiffusionImg2ImgPipeline,
- OnnxStableDiffusionInpaintPipeline,
- OnnxStableDiffusionInpaintPipelineLegacy,
- OnnxStableDiffusionPipeline,
- OnnxStableDiffusionUpscalePipeline,
- StableDiffusionOnnxPipeline,
+ _import_structure["stable_diffusion"].extend(
+ [
+ "OnnxStableDiffusionImg2ImgPipeline",
+ "OnnxStableDiffusionInpaintPipeline",
+ "OnnxStableDiffusionInpaintPipelineLegacy",
+ "OnnxStableDiffusionPipeline",
+ "OnnxStableDiffusionUpscalePipeline",
+ "StableDiffusionOnnxPipeline",
+ ]
)
try:
if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
+ from ..utils import dummy_torch_and_transformers_and_k_diffusion_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_and_k_diffusion_objects))
+
else:
- from .stable_diffusion import StableDiffusionKDiffusionPipeline
+ _import_structure["stable_diffusion"].extend(["StableDiffusionKDiffusionPipeline"])
try:
if not is_flax_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_flax_objects import * # noqa F403
+ from ..utils import dummy_flax_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_flax_objects))
+
else:
- from .pipeline_flax_utils import FlaxDiffusionPipeline
+ _import_structure["pipeline_flax_utils"] = ["FlaxDiffusionPipeline"]
try:
if not (is_flax_available() and is_transformers_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_flax_and_transformers_objects import * # noqa F403
+ from ..utils import dummy_flax_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_flax_and_transformers_objects))
+
else:
- from .controlnet import FlaxStableDiffusionControlNetPipeline
- from .stable_diffusion import (
- FlaxStableDiffusionImg2ImgPipeline,
- FlaxStableDiffusionInpaintPipeline,
- FlaxStableDiffusionPipeline,
+ _import_structure["controlnet"].extend(["FlaxStableDiffusionControlNetPipeline"])
+ _import_structure["stable_diffusion"].extend(
+ [
+ "FlaxStableDiffusionImg2ImgPipeline",
+ "FlaxStableDiffusionInpaintPipeline",
+ "FlaxStableDiffusionPipeline",
+ ]
)
try:
if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
+ from ..utils import dummy_transformers_and_torch_and_note_seq_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_transformers_and_torch_and_note_seq_objects))
+
else:
- from .spectrogram_diffusion import MidiProcessor, SpectrogramDiffusionPipeline
+ _import_structure["spectrogram_diffusion"] = ["MidiProcessor", "SpectrogramDiffusionPipeline"]
+
+
+import sys
+
+
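+# Register a _LazyModule in place of this package so the entries in _import_structure are
+# only imported on first attribute access, which keeps `import diffusers` lightweight.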
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
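+# Attach the collected dummy objects so that pipelines whose optional backend is missing
+# resolve to placeholders that raise a helpful installation error when used.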
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/alt_diffusion/__init__.py b/src/diffusers/pipelines/alt_diffusion/__init__.py
index 03c9f5ebc63e..c2e4db7eab1c 100644
--- a/src/diffusers/pipelines/alt_diffusion/__init__.py
+++ b/src/diffusers/pipelines/alt_diffusion/__init__.py
@@ -1,38 +1,37 @@
-from dataclasses import dataclass
-from typing import List, Optional, Union
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
-import numpy as np
-import PIL
-from PIL import Image
-
-from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
-
-
-@dataclass
-# Copied from diffusers.pipelines.stable_diffusion.__init__.StableDiffusionPipelineOutput with Stable->Alt
-class AltDiffusionPipelineOutput(BaseOutput):
- """
- Output class for Alt Diffusion pipelines.
-
- Args:
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
- num_channels)`.
- nsfw_content_detected (`List[bool]`)
- List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
- `None` if safety checking could not be performed.
- """
-
- images: Union[List[PIL.Image.Image], np.ndarray]
- nsfw_content_detected: Optional[List[bool]]
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline
+ from ...utils import dummy_torch_and_transformers_objects
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .modeling_roberta_series import RobertaSeriesModelWithTransformation
- from .pipeline_alt_diffusion import AltDiffusionPipeline
- from .pipeline_alt_diffusion_img2img import AltDiffusionImg2ImgPipeline
+ _import_structure["pipeline_output"] = ["AltDiffusionPipelineOutput"]
+ _import_structure["modeling_roberta_series"] = ["RobertaSeriesModelWithTransformation"]
+ _import_structure["pipeline_alt_diffusion"] = ["AltDiffusionPipeline"]
+ _import_structure["pipeline_alt_diffusion_img2img"] = ["AltDiffusionImg2ImgPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py b/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py
index 78e46990b50c..7af8027ed763 100644
--- a/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py
+++ b/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py
@@ -27,7 +27,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor, replace_example_docstring
+from ...utils import deprecate, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from . import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation
diff --git a/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py b/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py
index 5713395639cc..a7219446d273 100644
--- a/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py
+++ b/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py
@@ -29,7 +29,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor, replace_example_docstring
+from ...utils import PIL_INTERPOLATION, deprecate, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from . import AltDiffusionPipelineOutput, RobertaSeriesModelWithTransformation
diff --git a/src/diffusers/pipelines/alt_diffusion/pipeline_output.py b/src/diffusers/pipelines/alt_diffusion/pipeline_output.py
new file mode 100644
index 000000000000..220c7f358402
--- /dev/null
+++ b/src/diffusers/pipelines/alt_diffusion/pipeline_output.py
@@ -0,0 +1,28 @@
+from dataclasses import dataclass
+from typing import List, Optional, Union
+
+import numpy as np
+import PIL.Image
+
+from ...utils import BaseOutput
+
+
+@dataclass
+# Copied from diffusers.pipelines.stable_diffusion.pipeline_output.StableDiffusionPipelineOutput with Stable->Alt
+class AltDiffusionPipelineOutput(BaseOutput):
+ """
+ Output class for Alt Diffusion pipelines.
+
+ Args:
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
+ List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
+ num_channels)`.
+ nsfw_content_detected (`List[bool]`)
+ List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
+ `None` if safety checking could not be performed.
+ """
+
+ images: Union[List[PIL.Image.Image], np.ndarray]
+ nsfw_content_detected: Optional[List[bool]]
diff --git a/src/diffusers/pipelines/audio_diffusion/__init__.py b/src/diffusers/pipelines/audio_diffusion/__init__.py
index 58554c45ea52..578a94693382 100644
--- a/src/diffusers/pipelines/audio_diffusion/__init__.py
+++ b/src/diffusers/pipelines/audio_diffusion/__init__.py
@@ -1,2 +1,18 @@
-from .mel import Mel
-from .pipeline_audio_diffusion import AudioDiffusionPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_dummy_objects = {}
+
+_import_structure["mel"] = ["Mel"]
+_import_structure["pipeline_audio_diffusion"] = ["AudioDiffusionPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py b/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
index 74737560cd8e..a06217c19bf7 100644
--- a/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
+++ b/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py
@@ -22,7 +22,7 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import DDIMScheduler, DDPMScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import AudioPipelineOutput, BaseOutput, DiffusionPipeline, ImagePipelineOutput
from .mel import Mel
diff --git a/src/diffusers/pipelines/audioldm/__init__.py b/src/diffusers/pipelines/audioldm/__init__.py
index 8ddef6c3f325..2acd5c25ed75 100644
--- a/src/diffusers/pipelines/audioldm/__init__.py
+++ b/src/diffusers/pipelines/audioldm/__init__.py
@@ -1,11 +1,16 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_torch_available,
is_transformers_available,
is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.27.0")):
raise OptionalDependencyNotAvailable()
@@ -13,5 +18,21 @@
from ...utils.dummy_torch_and_transformers_objects import (
AudioLDMPipeline,
)
+
+ _dummy_objects.update({"AudioLDMPipeline": AudioLDMPipeline})
+
else:
- from .pipeline_audioldm import AudioLDMPipeline
+ _import_structure["pipeline_audioldm"] = ["AudioLDMPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/audioldm/pipeline_audioldm.py b/src/diffusers/pipelines/audioldm/pipeline_audioldm.py
index f577f51dd5ab..c95e45000133 100644
--- a/src/diffusers/pipelines/audioldm/pipeline_audioldm.py
+++ b/src/diffusers/pipelines/audioldm/pipeline_audioldm.py
@@ -22,7 +22,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import logging, randn_tensor, replace_example_docstring
+from ...utils import logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline
diff --git a/src/diffusers/pipelines/audioldm2/__init__.py b/src/diffusers/pipelines/audioldm2/__init__.py
index 3917a6eb2116..67001f8e44ca 100644
--- a/src/diffusers/pipelines/audioldm2/__init__.py
+++ b/src/diffusers/pipelines/audioldm2/__init__.py
@@ -1,20 +1,34 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.27.0")):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import (
- AudioLDM2Pipeline,
- AudioLDM2ProjectionModel,
- AudioLDM2UNet2DConditionModel,
- )
+ from ...utils import dummy_torch_and_transformers_objects
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .modeling_audioldm2 import AudioLDM2ProjectionModel, AudioLDM2UNet2DConditionModel
- from .pipeline_audioldm2 import AudioLDM2Pipeline
+ _import_structure["modeling_audioldm2"] = ["AudioLDM2ProjectionModel", "AudioLDM2UNet2DConditionModel"]
+ _import_structure["pipeline_audioldm2"] = ["AudioLDM2Pipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+    __name__,
+    globals()["__file__"],
+    _import_structure,
+    module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+    setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py b/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py
index 224b2a731b38..e5e03036caec 100644
--- a/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py
+++ b/src/diffusers/pipelines/audioldm2/pipeline_audioldm2.py
@@ -36,9 +36,9 @@
is_accelerate_version,
is_librosa_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline
from .modeling_audioldm2 import AudioLDM2ProjectionModel, AudioLDM2UNet2DConditionModel
diff --git a/src/diffusers/pipelines/consistency_models/__init__.py b/src/diffusers/pipelines/consistency_models/__init__.py
index fd78ddb3aae2..d1d2ab59500b 100644
--- a/src/diffusers/pipelines/consistency_models/__init__.py
+++ b/src/diffusers/pipelines/consistency_models/__init__.py
@@ -1 +1,17 @@
-from .pipeline_consistency_models import ConsistencyModelPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_consistency_models"] = ["ConsistencyModelPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py b/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
index 83cb37dc1e35..511c767aeaf4 100644
--- a/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
+++ b/src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
@@ -8,9 +8,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/controlnet/__init__.py b/src/diffusers/pipelines/controlnet/__init__.py
index 0cd7b69fe618..60b3fa0b7539 100644
--- a/src/diffusers/pipelines/controlnet/__init__.py
+++ b/src/diffusers/pipelines/controlnet/__init__.py
@@ -1,25 +1,57 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_flax_available,
is_torch_available,
is_transformers_available,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
+
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
+
else:
- from .multicontrolnet import MultiControlNetModel
- from .pipeline_controlnet import StableDiffusionControlNetPipeline
- from .pipeline_controlnet_img2img import StableDiffusionControlNetImg2ImgPipeline
- from .pipeline_controlnet_inpaint import StableDiffusionControlNetInpaintPipeline
- from .pipeline_controlnet_inpaint_sd_xl import StableDiffusionXLControlNetInpaintPipeline
- from .pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
- from .pipeline_controlnet_sd_xl_img2img import StableDiffusionXLControlNetImg2ImgPipeline
+ _import_structure["multicontrolnet"] = ["MultiControlNetModel"]
+ _import_structure["pipeline_controlnet"] = ["StableDiffusionControlNetPipeline"]
+ _import_structure["pipeline_controlnet_img2img"] = ["StableDiffusionControlNetImg2ImgPipeline"]
+ _import_structure["pipeline_controlnet_inpaint"] = ["StableDiffusionControlNetInpaintPipeline"]
+ _import_structure["pipeline_controlnet_sd_xl"] = ["StableDiffusionXLControlNetPipeline"]
+ _import_structure["pipeline_controlnet_sd_xl_img2img"] = ["StableDiffusionXLControlNetImg2ImgPipeline"]
+ _import_structure["pipeline_controlnet_inpaint_sd_xl"] = ["StableDiffusionXLControlNetInpaintPipeline"]
+
+try:
+ if not (is_transformers_available() and is_flax_available()):
+ raise OptionalDependencyNotAvailable()
+except OptionalDependencyNotAvailable:
+ from ...utils import dummy_flax_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_flax_and_transformers_objects))
+
+else:
+ _import_structure["pipeline_flax_controlnet"] = ["FlaxStableDiffusionControlNetPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
-if is_transformers_available() and is_flax_available():
- from .pipeline_flax_controlnet import FlaxStableDiffusionControlNetPipeline
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet.py
index 82e3851377d9..bb569249e5f5 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet.py
@@ -31,11 +31,10 @@
deprecate,
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion import StableDiffusionPipelineOutput
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py
index 88410ad0d7c3..7a173d98d279 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet_img2img.py
@@ -30,11 +30,10 @@
deprecate,
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion import StableDiffusionPipelineOutput
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py
index f98e4bb20c3c..c933bf9ccee5 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint.py
@@ -32,11 +32,10 @@
deprecate,
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion import StableDiffusionPipelineOutput
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py
index c64204501b97..9d0dd462ba7e 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet_inpaint_sd_xl.py
@@ -38,12 +38,11 @@
from ...utils import (
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .multicontrolnet import MultiControlNetModel
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
index ef6b54e81548..50e13b76d664 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py
@@ -39,11 +39,10 @@
from ...utils import (
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion_xl import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py b/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py
index 02f3d8e4b36d..ca3bc8ca7754 100644
--- a/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py
+++ b/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl_img2img.py
@@ -38,11 +38,10 @@
from ...utils import (
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import is_compiled_module, randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion_xl import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/dance_diffusion/__init__.py b/src/diffusers/pipelines/dance_diffusion/__init__.py
index 55d7f8ff9807..39f213b35a04 100644
--- a/src/diffusers/pipelines/dance_diffusion/__init__.py
+++ b/src/diffusers/pipelines/dance_diffusion/__init__.py
@@ -1 +1,16 @@
-from .pipeline_dance_diffusion import DanceDiffusionPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_dance_diffusion"] = ["DanceDiffusionPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py b/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py
index b2d46c6f90f1..77c57a1425d3 100644
--- a/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py
+++ b/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py
@@ -17,7 +17,8 @@
import torch
-from ...utils import logging, randn_tensor
+from ...utils import logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline
diff --git a/src/diffusers/pipelines/ddim/__init__.py b/src/diffusers/pipelines/ddim/__init__.py
index 85e8118e75e7..1715a2b6acbb 100644
--- a/src/diffusers/pipelines/ddim/__init__.py
+++ b/src/diffusers/pipelines/ddim/__init__.py
@@ -1 +1,15 @@
-from .pipeline_ddim import DDIMPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_ddim"] = ["DDIMPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/ddim/pipeline_ddim.py b/src/diffusers/pipelines/ddim/pipeline_ddim.py
index 6eae78f2801e..dcb326ede058 100644
--- a/src/diffusers/pipelines/ddim/pipeline_ddim.py
+++ b/src/diffusers/pipelines/ddim/pipeline_ddim.py
@@ -17,7 +17,7 @@
import torch
from ...schedulers import DDIMScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/ddpm/__init__.py b/src/diffusers/pipelines/ddpm/__init__.py
index bb228ee012e8..a3936af03a6a 100644
--- a/src/diffusers/pipelines/ddpm/__init__.py
+++ b/src/diffusers/pipelines/ddpm/__init__.py
@@ -1 +1,17 @@
-from .pipeline_ddpm import DDPMPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_ddpm"] = ["DDPMPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/ddpm/pipeline_ddpm.py b/src/diffusers/pipelines/ddpm/pipeline_ddpm.py
index 1e9ead0f3d39..d34bea7f9cf0 100644
--- a/src/diffusers/pipelines/ddpm/pipeline_ddpm.py
+++ b/src/diffusers/pipelines/ddpm/pipeline_ddpm.py
@@ -17,7 +17,7 @@
import torch
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/deepfloyd_if/__init__.py b/src/diffusers/pipelines/deepfloyd_if/__init__.py
index 93414f20e733..a6d58cab9c81 100644
--- a/src/diffusers/pipelines/deepfloyd_if/__init__.py
+++ b/src/diffusers/pipelines/deepfloyd_if/__init__.py
@@ -1,54 +1,55 @@
-from dataclasses import dataclass
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-
-from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
-from .timesteps import (
- fast27_timesteps,
- smart27_timesteps,
- smart50_timesteps,
- smart100_timesteps,
- smart185_timesteps,
- super27_timesteps,
- super40_timesteps,
- super100_timesteps,
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
)
-@dataclass
-class IFPipelineOutput(BaseOutput):
- """
- Args:
- Output class for Stable Diffusion pipelines.
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
- num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.
- nsfw_detected (`List[bool]`)
- List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work"
- (nsfw) content or a watermark. `None` if safety checking could not be performed.
- watermark_detected (`List[bool]`)
- List of flags denoting whether the corresponding generated image likely has a watermark. `None` if safety
- checking could not be performed.
- """
-
- images: Union[List[PIL.Image.Image], np.ndarray]
- nsfw_detected: Optional[List[bool]]
- watermark_detected: Optional[List[bool]]
+_import_structure = {}
+_dummy_objects = {}
+_import_structure["timesteps"] = [
+ "fast27_timesteps",
+ "smart27_timesteps",
+ "smart50_timesteps",
+ "smart100_timesteps",
+ "smart185_timesteps",
+ "super27_timesteps",
+ "super40_timesteps",
+ "super100_timesteps",
+]
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_if import IFPipeline
- from .pipeline_if_img2img import IFImg2ImgPipeline
- from .pipeline_if_img2img_superresolution import IFImg2ImgSuperResolutionPipeline
- from .pipeline_if_inpainting import IFInpaintingPipeline
- from .pipeline_if_inpainting_superresolution import IFInpaintingSuperResolutionPipeline
- from .pipeline_if_superresolution import IFSuperResolutionPipeline
- from .safety_checker import IFSafetyChecker
- from .watermark import IFWatermarker
+ _import_structure["pipeline_output"] = ["IFPipelineOutput"]
+ _import_structure["pipeline_if"] = ["IFPipeline"]
+ _import_structure["pipeline_if_img2img"] = ["IFImg2ImgPipeline"]
+ _import_structure["pipeline_if_img2img_superresolution"] = ["IFImg2ImgSuperResolutionPipeline"]
+ _import_structure["pipeline_if_inpainting"] = ["IFInpaintingPipeline"]
+ _import_structure["pipeline_if_inpainting_superresolution"] = ["IFInpaintingSuperResolutionPipeline"]
+ _import_structure["pipeline_if_superresolution"] = ["IFSuperResolutionPipeline"]
+ _import_structure["safety_checker"] = ["IFSafetyChecker"]
+ _import_structure["watermark"] = ["IFWatermarker"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py
index 50939644ebd7..0f4e702268d4 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if.py
@@ -17,9 +17,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py
index afd8f691ea68..e14133f0e481 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img.py
@@ -20,9 +20,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py
index d00a19c92421..20ac5a90e2cc 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_img2img_superresolution.py
@@ -21,9 +21,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py
index a15341e26b69..d54c9aedc6a5 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py
@@ -20,9 +20,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py
index e523e6d332dc..1217d2d8398f 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting_superresolution.py
@@ -21,9 +21,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py
index eafdd6f0d28a..8e1a6338eaed 100644
--- a/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_if_superresolution.py
@@ -20,9 +20,9 @@
is_bs4_available,
is_ftfy_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import IFPipelineOutput
from .safety_checker import IFSafetyChecker
diff --git a/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py b/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py
new file mode 100644
index 000000000000..f33c4b9e46dd
--- /dev/null
+++ b/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py
@@ -0,0 +1,28 @@
+from dataclasses import dataclass
+from typing import List, Optional, Union
+
+import numpy as np
+import PIL.Image
+
+from ...utils import BaseOutput
+
+
+@dataclass
+class IFPipelineOutput(BaseOutput):
+ """
+    Output class for DeepFloyd IF pipelines.
+
+    Args:
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
+ List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
+            num_channels)`. PIL images or numpy array represent the denoised images of the diffusion pipeline.
+ nsfw_detected (`List[bool]`)
+ List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work"
+ (nsfw) content or a watermark. `None` if safety checking could not be performed.
+ watermark_detected (`List[bool]`)
+ List of flags denoting whether the corresponding generated image likely has a watermark. `None` if safety
+ checking could not be performed.
+ """
+
+ images: Union[List[PIL.Image.Image], np.ndarray]
+ nsfw_detected: Optional[List[bool]]
+ watermark_detected: Optional[List[bool]]
diff --git a/src/diffusers/pipelines/dit/__init__.py b/src/diffusers/pipelines/dit/__init__.py
index 4ef0729cb490..be3c74454393 100644
--- a/src/diffusers/pipelines/dit/__init__.py
+++ b/src/diffusers/pipelines/dit/__init__.py
@@ -1 +1,15 @@
-from .pipeline_dit import DiTPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_dit"] = ["DiTPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/dit/pipeline_dit.py b/src/diffusers/pipelines/dit/pipeline_dit.py
index d57f13c2991a..5f5b0b199168 100644
--- a/src/diffusers/pipelines/dit/pipeline_dit.py
+++ b/src/diffusers/pipelines/dit/pipeline_dit.py
@@ -24,7 +24,7 @@
from ...models import AutoencoderKL, Transformer2DModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky/__init__.py b/src/diffusers/pipelines/kandinsky/__init__.py
index 946d31649018..cc4580721eff 100644
--- a/src/diffusers/pipelines/kandinsky/__init__.py
+++ b/src/diffusers/pipelines/kandinsky/__init__.py
@@ -1,23 +1,46 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_kandinsky import KandinskyPipeline
- from .pipeline_kandinsky_combined import (
- KandinskyCombinedPipeline,
- KandinskyImg2ImgCombinedPipeline,
- KandinskyInpaintCombinedPipeline,
- )
- from .pipeline_kandinsky_img2img import KandinskyImg2ImgPipeline
- from .pipeline_kandinsky_inpaint import KandinskyInpaintPipeline
- from .pipeline_kandinsky_prior import KandinskyPriorPipeline, KandinskyPriorPipelineOutput
- from .text_encoder import MultilingualCLIP
+ _import_structure["pipeline_kandinsky"] = ["KandinskyPipeline"]
+ _import_structure["pipeline_kandinsky_combined"] = [
+ "KandinskyCombinedPipeline",
+ "KandinskyImg2ImgCombinedPipeline",
+ "KandinskyInpaintCombinedPipeline",
+ ]
+ _import_structure["pipeline_kandinsky_img2img"] = ["KandinskyImg2ImgPipeline"]
+ _import_structure["pipeline_kandinsky_inpaint"] = ["KandinskyInpaintPipeline"]
+ _import_structure["pipeline_kandinsky_prior"] = ["KandinskyPriorPipeline", "KandinskyPriorPipelineOutput"]
+ _import_structure["text_encoder"] = ["MultilingualCLIP"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py
index 89afa0060ef8..8545b8b42ff0 100644
--- a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py
+++ b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky.py
@@ -25,9 +25,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .text_encoder import MultilingualCLIP
diff --git a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py
index 5673d306aa0c..5013203049a1 100644
--- a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py
+++ b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py
@@ -28,9 +28,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .text_encoder import MultilingualCLIP
diff --git a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py
index dda0c3faa7fd..4a920b5c3262 100644
--- a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py
+++ b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_inpaint.py
@@ -32,9 +32,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .text_encoder import MultilingualCLIP
diff --git a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
index 57d8c7beb97a..b6c031feac29 100644
--- a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
+++ b/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_prior.py
@@ -27,9 +27,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
diff --git a/src/diffusers/pipelines/kandinsky2_2/__init__.py b/src/diffusers/pipelines/kandinsky2_2/__init__.py
index 4997a2e4056b..639d6ad977c2 100644
--- a/src/diffusers/pipelines/kandinsky2_2/__init__.py
+++ b/src/diffusers/pipelines/kandinsky2_2/__init__.py
@@ -1,25 +1,48 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_kandinsky2_2 import KandinskyV22Pipeline
- from .pipeline_kandinsky2_2_combined import (
- KandinskyV22CombinedPipeline,
- KandinskyV22Img2ImgCombinedPipeline,
- KandinskyV22InpaintCombinedPipeline,
- )
- from .pipeline_kandinsky2_2_controlnet import KandinskyV22ControlnetPipeline
- from .pipeline_kandinsky2_2_controlnet_img2img import KandinskyV22ControlnetImg2ImgPipeline
- from .pipeline_kandinsky2_2_img2img import KandinskyV22Img2ImgPipeline
- from .pipeline_kandinsky2_2_inpainting import KandinskyV22InpaintPipeline
- from .pipeline_kandinsky2_2_prior import KandinskyV22PriorPipeline
- from .pipeline_kandinsky2_2_prior_emb2emb import KandinskyV22PriorEmb2EmbPipeline
+ _import_structure["pipeline_kandinsky2_2"] = ["KandinskyV22Pipeline"]
+ _import_structure["pipeline_kandinsky2_2_combined"] = [
+ "KandinskyV22CombinedPipeline",
+ "KandinskyV22Img2ImgCombinedPipeline",
+ "KandinskyV22InpaintCombinedPipeline",
+ ]
+ _import_structure["pipeline_kandinsky2_2_controlnet"] = ["KandinskyV22ControlnetPipeline"]
+ _import_structure["pipeline_kandinsky2_2_controlnet_img2img"] = ["KandinskyV22ControlnetImg2ImgPipeline"]
+ _import_structure["pipeline_kandinsky2_2_img2img"] = ["KandinskyV22Img2ImgPipeline"]
+ _import_structure["pipeline_kandinsky2_2_inpainting"] = ["KandinskyV22InpaintPipeline"]
+ _import_structure["pipeline_kandinsky2_2_prior"] = ["KandinskyV22PriorPipeline"]
+ _import_structure["pipeline_kandinsky2_2_prior_emb2emb"] = ["KandinskyV22PriorEmb2EmbPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py
index ccbdae09dc08..2ff2d8b004ab 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2.py
@@ -22,9 +22,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py
index 22b3eaf0915e..ec82f4516042 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py
@@ -22,9 +22,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py
index 1b3328faaf97..8a2deb52fbce 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py
@@ -25,9 +25,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py
index 82e609ce7cd1..9b0f576fa7d0 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_img2img.py
@@ -25,9 +25,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py
index 2e0a0d833740..7320a62ef6e0 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py
@@ -29,9 +29,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py
index 3cf33b563145..943363dc7795 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior.py
@@ -10,9 +10,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..kandinsky import KandinskyPriorPipelineOutput
from ..pipeline_utils import DiffusionPipeline
diff --git a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py
index 75e1644f6186..f17f463b9bfe 100644
--- a/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py
+++ b/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_prior_emb2emb.py
@@ -10,9 +10,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..kandinsky import KandinskyPriorPipelineOutput
from ..pipeline_utils import DiffusionPipeline
diff --git a/src/diffusers/pipelines/latent_diffusion/__init__.py b/src/diffusers/pipelines/latent_diffusion/__init__.py
index a6c16f598695..a78e6622bcfe 100644
--- a/src/diffusers/pipelines/latent_diffusion/__init__.py
+++ b/src/diffusers/pipelines/latent_diffusion/__init__.py
@@ -1,11 +1,37 @@
-from ...utils import OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
-from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
+
+
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
- from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
+ _import_structure["pipeline_latent_diffusion"] = ["LDMBertModel", "LDMTextToImagePipeline"]
+ _import_structure["pipeline_latent_diffusion_superresolution"] = ["LDMSuperResolutionPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
index e86f7b985e47..4b4315a421e8 100644
--- a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
+++ b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
@@ -25,7 +25,7 @@
from ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel
from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
index c8d5c1a1891d..def1183abc9e 100644
--- a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
+++ b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py
@@ -15,7 +15,8 @@
LMSDiscreteScheduler,
PNDMScheduler,
)
-from ...utils import PIL_INTERPOLATION, randn_tensor
+from ...utils import PIL_INTERPOLATION
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/latent_diffusion_uncond/__init__.py b/src/diffusers/pipelines/latent_diffusion_uncond/__init__.py
index 1b9fc5270a62..73e5c703f61a 100644
--- a/src/diffusers/pipelines/latent_diffusion_uncond/__init__.py
+++ b/src/diffusers/pipelines/latent_diffusion_uncond/__init__.py
@@ -1 +1,15 @@
-from .pipeline_latent_diffusion_uncond import LDMPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_latent_diffusion_uncond"] = ["LDMPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py b/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py
index be130a74c28c..f3638eee86fc 100644
--- a/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py
+++ b/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py
@@ -19,7 +19,7 @@
from ...models import UNet2DModel, VQModel
from ...schedulers import DDIMScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
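
All of the `randn_tensor` hunks in this patch make the same mechanical change: the helper is no longer re-exported from `diffusers.utils` and is imported from `diffusers.utils.torch_utils` instead. A small usage sketch of the new import path (the latent shape and seed below are illustrative, not taken from the diff):

```python
import torch

from diffusers.utils.torch_utils import randn_tensor  # new import path introduced by this PR

# Draw reproducible Gaussian noise for a (1, 4, 64, 64) latent on the CPU.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = randn_tensor(
    (1, 4, 64, 64),
    generator=generator,
    device=torch.device("cpu"),
    dtype=torch.float32,
)
print(latents.shape)  # torch.Size([1, 4, 64, 64])
```
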
diff --git a/src/diffusers/pipelines/musicldm/__init__.py b/src/diffusers/pipelines/musicldm/__init__.py
index b82f429798e7..6228f763a53b 100644
--- a/src/diffusers/pipelines/musicldm/__init__.py
+++ b/src/diffusers/pipelines/musicldm/__init__.py
@@ -1,17 +1,36 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.27.0")):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import (
- MusicLDMPipeline,
- )
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_musicldm import MusicLDMPipeline
+ _import_structure["pipeline_musicldm"] = ["MusicLDMPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/musicldm/pipeline_musicldm.py b/src/diffusers/pipelines/musicldm/pipeline_musicldm.py
index 802de432e1c0..a891099f1aac 100644
--- a/src/diffusers/pipelines/musicldm/pipeline_musicldm.py
+++ b/src/diffusers/pipelines/musicldm/pipeline_musicldm.py
@@ -28,7 +28,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import is_librosa_available, logging, randn_tensor, replace_example_docstring
+from ...utils import is_librosa_available, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import AudioPipelineOutput, DiffusionPipeline
diff --git a/src/diffusers/pipelines/paint_by_example/__init__.py b/src/diffusers/pipelines/paint_by_example/__init__.py
index 9d3ce86531ee..c19ce1036e3f 100644
--- a/src/diffusers/pipelines/paint_by_example/__init__.py
+++ b/src/diffusers/pipelines/paint_by_example/__init__.py
@@ -5,14 +5,38 @@
import PIL
from PIL import Image
-from ...utils import OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
+_import_structure = {}
+_dummy_objects = {}
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
- from .image_encoder import PaintByExampleImageEncoder
- from .pipeline_paint_by_example import PaintByExamplePipeline
+ _import_structure["image_encoder"] = ["PaintByExampleImageEncoder"]
+ _import_structure["pipeline_paint_by_example"] = ["PaintByExamplePipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py b/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
index a0e0f9f6d624..383edae08e8f 100644
--- a/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
+++ b/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py
@@ -23,7 +23,8 @@
from ...image_processor import VaeImageProcessor
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion import StableDiffusionPipelineOutput
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/pipeline_utils.py b/src/diffusers/pipelines/pipeline_utils.py
index 110c97acdcdf..fb120ebc7d3b 100644
--- a/src/diffusers/pipelines/pipeline_utils.py
+++ b/src/diffusers/pipelines/pipeline_utils.py
@@ -51,12 +51,12 @@
get_class_from_dynamic_module,
is_accelerate_available,
is_accelerate_version,
- is_compiled_module,
is_torch_version,
is_transformers_available,
logging,
numpy_to_pil,
)
+from ..utils.torch_utils import is_compiled_module
if is_transformers_available():
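
`is_compiled_module` makes the same move as `randn_tensor`: out of the top-level `utils` namespace and into `utils.torch_utils`. A hedged sketch of what the helper is used for, assuming PyTorch 2.x so that `torch.compile` is available:

```python
import torch

from diffusers.utils.torch_utils import is_compiled_module  # new import path after this PR

layer = torch.nn.Linear(4, 4)
compiled = torch.compile(layer)  # wraps the module in an OptimizedModule

print(is_compiled_module(layer))     # False
print(is_compiled_module(compiled))  # True
```
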
diff --git a/src/diffusers/pipelines/pndm/__init__.py b/src/diffusers/pipelines/pndm/__init__.py
index 488eb4f5f2b2..7374016c32d9 100644
--- a/src/diffusers/pipelines/pndm/__init__.py
+++ b/src/diffusers/pipelines/pndm/__init__.py
@@ -1 +1,16 @@
-from .pipeline_pndm import PNDMPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_pndm"] = ["PNDMPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/pndm/pipeline_pndm.py b/src/diffusers/pipelines/pndm/pipeline_pndm.py
index 4add91fd1a69..78690997223a 100644
--- a/src/diffusers/pipelines/pndm/pipeline_pndm.py
+++ b/src/diffusers/pipelines/pndm/pipeline_pndm.py
@@ -19,7 +19,7 @@
from ...models import UNet2DModel
from ...schedulers import PNDMScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/repaint/__init__.py b/src/diffusers/pipelines/repaint/__init__.py
index 16bc86d1cedf..2a0eedf30bbf 100644
--- a/src/diffusers/pipelines/repaint/__init__.py
+++ b/src/diffusers/pipelines/repaint/__init__.py
@@ -1 +1,15 @@
-from .pipeline_repaint import RePaintPipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_repaint"] = ["RePaintPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/repaint/pipeline_repaint.py b/src/diffusers/pipelines/repaint/pipeline_repaint.py
index 398a50cf5e25..5372c2431d52 100644
--- a/src/diffusers/pipelines/repaint/pipeline_repaint.py
+++ b/src/diffusers/pipelines/repaint/pipeline_repaint.py
@@ -21,7 +21,8 @@
from ...models import UNet2DModel
from ...schedulers import RePaintScheduler
-from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor
+from ...utils import PIL_INTERPOLATION, deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/score_sde_ve/__init__.py b/src/diffusers/pipelines/score_sde_ve/__init__.py
index c7c2a85c067b..2cd7ac2bf440 100644
--- a/src/diffusers/pipelines/score_sde_ve/__init__.py
+++ b/src/diffusers/pipelines/score_sde_ve/__init__.py
@@ -1 +1,15 @@
-from .pipeline_score_sde_ve import ScoreSdeVePipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_score_sde_ve"] = ["ScoreSdeVePipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py b/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py
index ace4f0c60db8..eb98479b9b61 100644
--- a/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py
+++ b/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py
@@ -18,7 +18,7 @@
from ...models import UNet2DModel
from ...schedulers import ScoreSdeVeScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/semantic_stable_diffusion/__init__.py b/src/diffusers/pipelines/semantic_stable_diffusion/__init__.py
index 95d3604bcf09..1b743ac3d58d 100644
--- a/src/diffusers/pipelines/semantic_stable_diffusion/__init__.py
+++ b/src/diffusers/pipelines/semantic_stable_diffusion/__init__.py
@@ -1,36 +1,38 @@
-from dataclasses import dataclass
-from enum import Enum
-from typing import List, Optional, Union
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
-import numpy as np
-import PIL
-from PIL import Image
-from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
-
-
-@dataclass
-class SemanticStableDiffusionPipelineOutput(BaseOutput):
- """
- Output class for Stable Diffusion pipelines.
-
- Args:
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
- num_channels)`.
- nsfw_content_detected (`List[bool]`)
-            List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
- `None` if safety checking could not be performed.
- """
-
- images: Union[List[PIL.Image.Image], np.ndarray]
- nsfw_content_detected: Optional[List[bool]]
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_semantic_stable_diffusion import SemanticStableDiffusionPipeline
+ _import_structure["pipeline_output"] = ["SemanticStableDiffusionPipelineOutput"]
+ _import_structure["pipeline_semantic_stable_diffusion"] = ["SemanticStableDiffusionPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_output.py b/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_output.py
new file mode 100644
index 000000000000..172715da864e
--- /dev/null
+++ b/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_output.py
@@ -0,0 +1,25 @@
+from dataclasses import dataclass
+from typing import List, Optional, Union
+
+import numpy as np
+import PIL
+
+from ...utils import BaseOutput
+
+
+@dataclass
+class SemanticStableDiffusionPipelineOutput(BaseOutput):
+ """
+ Output class for Stable Diffusion pipelines.
+
+ Args:
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
+ List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
+ num_channels)`.
+ nsfw_content_detected (`List[bool]`)
+            List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
+ `None` if safety checking could not be performed.
+ """
+
+ images: Union[List[PIL.Image.Image], np.ndarray]
+ nsfw_content_detected: Optional[List[bool]]
diff --git a/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py b/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
index b9ad42c4722d..c27b03968ec1 100644
--- a/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
+++ b/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py
@@ -9,7 +9,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import SemanticStableDiffusionPipelineOutput
diff --git a/src/diffusers/pipelines/shap_e/__init__.py b/src/diffusers/pipelines/shap_e/__init__.py
index 04aa1f2f6d78..2a56148fee91 100644
--- a/src/diffusers/pipelines/shap_e/__init__.py
+++ b/src/diffusers/pipelines/shap_e/__init__.py
@@ -1,27 +1,47 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
- is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import ShapEPipeline
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
- from .camera import create_pan_cameras
- from .pipeline_shap_e import ShapEPipeline
- from .pipeline_shap_e_img2img import ShapEImg2ImgPipeline
- from .renderer import (
- BoundingBoxVolume,
- ImportanceRaySampler,
- MLPNeRFModelOutput,
- MLPNeRSTFModel,
- ShapEParamsProjModel,
- ShapERenderer,
- StratifiedRaySampler,
- VoidNeRFModel,
- )
+ _import_structure["camera"] = ["create_pan_cameras"]
+ _import_structure["pipeline_shap_e"] = ["ShapEPipeline"]
+ _import_structure["pipeline_shap_e_img2img"] = ["ShapEImg2ImgPipeline"]
+ _import_structure["renderer"] = [
+ "BoundingBoxVolume",
+ "ImportanceRaySampler",
+ "MLPNeRFModelOutput",
+ "MLPNeRSTFModel",
+ "ShapEParamsProjModel",
+ "ShapERenderer",
+ "StratifiedRaySampler",
+ "VoidNeRFModel",
+ ]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/shap_e/pipeline_shap_e.py b/src/diffusers/pipelines/shap_e/pipeline_shap_e.py
index 266075d93b30..7a6cd4589a0a 100644
--- a/src/diffusers/pipelines/shap_e/pipeline_shap_e.py
+++ b/src/diffusers/pipelines/shap_e/pipeline_shap_e.py
@@ -28,9 +28,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .renderer import ShapERenderer
diff --git a/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py b/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
index 6aa75ca0d541..a8ef7aa09027 100644
--- a/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
+++ b/src/diffusers/pipelines/shap_e/pipeline_shap_e_img2img.py
@@ -25,9 +25,9 @@
from ...utils import (
BaseOutput,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .renderer import ShapERenderer
diff --git a/src/diffusers/pipelines/spectrogram_diffusion/__init__.py b/src/diffusers/pipelines/spectrogram_diffusion/__init__.py
index 05b14a857630..e8bcf63c2986 100644
--- a/src/diffusers/pipelines/spectrogram_diffusion/__init__.py
+++ b/src/diffusers/pipelines/spectrogram_diffusion/__init__.py
@@ -1,21 +1,33 @@
# flake8: noqa
-from ...utils import is_note_seq_available, is_transformers_available, is_torch_available
-from ...utils import OptionalDependencyNotAvailable
+from ...utils import (
+ _LazyModule,
+ is_note_seq_available,
+ OptionalDependencyNotAvailable,
+ is_torch_available,
+ is_transformers_available,
+ get_objects_from_module,
+)
+
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .notes_encoder import SpectrogramNotesEncoder
- from .continous_encoder import SpectrogramContEncoder
- from .pipeline_spectrogram_diffusion import (
- SpectrogramContEncoder,
- SpectrogramDiffusionPipeline,
- T5FilmDecoder,
- )
+ _import_structure["notes_encoder"] = ["SpectrogramNotesEncoder"]
+ _import_structure["continous_encoder"] = ["SpectrogramContEncoder"]
+ _import_structure["pipeline_spectrogram_diffusion"] = [
+ "SpectrogramContEncoder",
+ "SpectrogramDiffusionPipeline",
+ "T5FilmDecoder",
+ ]
try:
if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
@@ -23,4 +35,16 @@
except OptionalDependencyNotAvailable:
from ...utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
else:
- from .midi_utils import MidiProcessor
+ _import_structure["midi_utils"] = ["MidiProcessor"]
+
+import sys
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py b/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
index bb3922e77fd1..5ab503df49ca 100644
--- a/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
+++ b/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py
@@ -21,7 +21,8 @@
from ...models import T5FilmDecoder
from ...schedulers import DDPMScheduler
-from ...utils import is_onnx_available, logging, randn_tensor
+from ...utils import is_onnx_available, logging
+from ...utils.torch_utils import randn_tensor
if is_onnx_available():
diff --git a/src/diffusers/pipelines/stable_diffusion/__init__.py b/src/diffusers/pipelines/stable_diffusion/__init__.py
index b92081434556..f6f3327c5fb6 100644
--- a/src/diffusers/pipelines/stable_diffusion/__init__.py
+++ b/src/diffusers/pipelines/stable_diffusion/__init__.py
@@ -1,13 +1,7 @@
-from dataclasses import dataclass
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-from PIL import Image
-
from ...utils import (
- BaseOutput,
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_flax_available,
is_k_diffusion_available,
is_k_diffusion_version,
@@ -18,59 +12,56 @@
)
-@dataclass
-class StableDiffusionPipelineOutput(BaseOutput):
- """
- Output class for Stable Diffusion pipelines.
-
- Args:
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
- num_channels)`.
- nsfw_content_detected (`List[bool]`)
- List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
- `None` if safety checking could not be performed.
- """
+_import_structure = {}
+_additional_imports = {}
+_dummy_objects = {}
- images: Union[List[PIL.Image.Image], np.ndarray]
- nsfw_content_detected: Optional[List[bool]]
+_import_structure["pipeline_output"] = ["StableDiffusionPipelineOutput"]
+if is_transformers_available() and is_flax_available():
+ _import_structure["pipeline_output"].extend(["FlaxStableDiffusionPipelineOutput"])
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .clip_image_project_model import CLIPImageProjection
- from .pipeline_cycle_diffusion import CycleDiffusionPipeline
- from .pipeline_stable_diffusion import StableDiffusionPipeline
- from .pipeline_stable_diffusion_attend_and_excite import StableDiffusionAttendAndExcitePipeline
- from .pipeline_stable_diffusion_gligen import StableDiffusionGLIGENPipeline
- from .pipeline_stable_diffusion_gligen_text_image import StableDiffusionGLIGENTextImagePipeline
- from .pipeline_stable_diffusion_img2img import StableDiffusionImg2ImgPipeline
- from .pipeline_stable_diffusion_inpaint import StableDiffusionInpaintPipeline
- from .pipeline_stable_diffusion_inpaint_legacy import StableDiffusionInpaintPipelineLegacy
- from .pipeline_stable_diffusion_instruct_pix2pix import StableDiffusionInstructPix2PixPipeline
- from .pipeline_stable_diffusion_latent_upscale import StableDiffusionLatentUpscalePipeline
- from .pipeline_stable_diffusion_ldm3d import StableDiffusionLDM3DPipeline
- from .pipeline_stable_diffusion_model_editing import StableDiffusionModelEditingPipeline
- from .pipeline_stable_diffusion_panorama import StableDiffusionPanoramaPipeline
- from .pipeline_stable_diffusion_paradigms import StableDiffusionParadigmsPipeline
- from .pipeline_stable_diffusion_sag import StableDiffusionSAGPipeline
- from .pipeline_stable_diffusion_upscale import StableDiffusionUpscalePipeline
- from .pipeline_stable_unclip import StableUnCLIPPipeline
- from .pipeline_stable_unclip_img2img import StableUnCLIPImg2ImgPipeline
- from .safety_checker import StableDiffusionSafetyChecker
- from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
+ _import_structure["pipeline_cycle_diffusion"] = ["CycleDiffusionPipeline"]
+ _import_structure["pipeline_stable_diffusion"] = ["StableDiffusionPipeline"]
+ _import_structure["pipeline_stable_diffusion_attend_and_excite"] = ["StableDiffusionAttendAndExcitePipeline"]
+ _import_structure["pipeline_stable_diffusion_gligen"] = ["StableDiffusionGLIGENPipeline"]
+ _import_structure["pipeline_stable_diffusion_img2img"] = ["StableDiffusionImg2ImgPipeline"]
+ _import_structure["pipeline_stable_diffusion_inpaint"] = ["StableDiffusionInpaintPipeline"]
+ _import_structure["pipeline_stable_diffusion_inpaint_legacy"] = ["StableDiffusionInpaintPipelineLegacy"]
+ _import_structure["pipeline_stable_diffusion_instruct_pix2pix"] = ["StableDiffusionInstructPix2PixPipeline"]
+ _import_structure["pipeline_stable_diffusion_latent_upscale"] = ["StableDiffusionLatentUpscalePipeline"]
+ _import_structure["pipeline_stable_diffusion_ldm3d"] = ["StableDiffusionLDM3DPipeline"]
+ _import_structure["pipeline_stable_diffusion_model_editing"] = ["StableDiffusionModelEditingPipeline"]
+ _import_structure["pipeline_stable_diffusion_panorama"] = ["StableDiffusionPanoramaPipeline"]
+ _import_structure["pipeline_stable_diffusion_paradigms"] = ["StableDiffusionParadigmsPipeline"]
+ _import_structure["pipeline_stable_diffusion_sag"] = ["StableDiffusionSAGPipeline"]
+ _import_structure["pipeline_stable_diffusion_upscale"] = ["StableDiffusionUpscalePipeline"]
+ _import_structure["pipeline_stable_unclip"] = ["StableUnCLIPPipeline"]
+ _import_structure["pipeline_stable_unclip_img2img"] = ["StableUnCLIPImg2ImgPipeline"]
+ _import_structure["safety_checker"] = ["StableDiffusionSafetyChecker"]
+ _import_structure["stable_unclip_image_normalizer"] = ["StableUnCLIPImageNormalizer"]
+ _import_structure["pipeline_stable_diffusion_gligen_text_image"] = ["StableDiffusionGLIGENTextImagePipeline"]
+ _import_structure["pipeline_stable_diffusion_gligen"] = ["StableDiffusionGLIGENPipeline"]
+ _import_structure["clip_image_project_model"] = ["CLIPImageProjection"]
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.25.0")):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
from ...utils.dummy_torch_and_transformers_objects import StableDiffusionImageVariationPipeline
+
+ _dummy_objects.update({"StableDiffusionImageVariationPipeline": StableDiffusionImageVariationPipeline})
else:
- from .pipeline_stable_diffusion_image_variation import StableDiffusionImageVariationPipeline
+ _import_structure["pipeline_stable_diffusion_image_variation"] = ["StableDiffusionImageVariationPipeline"]
try:
@@ -82,10 +73,18 @@ class StableDiffusionPipelineOutput(BaseOutput):
StableDiffusionDiffEditPipeline,
StableDiffusionPix2PixZeroPipeline,
)
+
+ _dummy_objects.update(
+ {
+ "StableDiffusionDepth2ImgPipeline": StableDiffusionDepth2ImgPipeline,
+ "StableDiffusionDiffEditPipeline": StableDiffusionDiffEditPipeline,
+ "StableDiffusionPix2PixZeroPipeline": StableDiffusionPix2PixZeroPipeline,
+ }
+ )
else:
- from .pipeline_stable_diffusion_depth2img import StableDiffusionDepth2ImgPipeline
- from .pipeline_stable_diffusion_diffedit import StableDiffusionDiffEditPipeline
- from .pipeline_stable_diffusion_pix2pix_zero import StableDiffusionPix2PixZeroPipeline
+ _import_structure["pipeline_stable_diffusion_depth2img"] = ["StableDiffusionDepth2ImgPipeline"]
+ _import_structure["pipeline_stable_diffusion_diffedit"] = ["StableDiffusionDiffEditPipeline"]
+ _import_structure["pipeline_stable_diffusion_pix2pix_zero"] = ["StableDiffusionPix2PixZeroPipeline"]
try:
@@ -97,43 +96,52 @@ class StableDiffusionPipelineOutput(BaseOutput):
):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_and_k_diffusion_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_and_k_diffusion_objects))
+
else:
- from .pipeline_stable_diffusion_k_diffusion import StableDiffusionKDiffusionPipeline
+ _import_structure["pipeline_stable_diffusion_k_diffusion"] = ["StableDiffusionKDiffusionPipeline"]
try:
if not (is_transformers_available() and is_onnx_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_onnx_objects import * # noqa F403
+ from ...utils import dummy_onnx_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_onnx_objects))
+
else:
- from .pipeline_onnx_stable_diffusion import OnnxStableDiffusionPipeline, StableDiffusionOnnxPipeline
- from .pipeline_onnx_stable_diffusion_img2img import OnnxStableDiffusionImg2ImgPipeline
- from .pipeline_onnx_stable_diffusion_inpaint import OnnxStableDiffusionInpaintPipeline
- from .pipeline_onnx_stable_diffusion_inpaint_legacy import OnnxStableDiffusionInpaintPipelineLegacy
- from .pipeline_onnx_stable_diffusion_upscale import OnnxStableDiffusionUpscalePipeline
+ _import_structure["pipeline_onnx_stable_diffusion"] = [
+ "OnnxStableDiffusionPipeline",
+ "StableDiffusionOnnxPipeline",
+ ]
+ _import_structure["pipeline_onnx_stable_diffusion_img2img"] = ["OnnxStableDiffusionImg2ImgPipeline"]
+ _import_structure["pipeline_onnx_stable_diffusion_inpaint"] = ["OnnxStableDiffusionInpaintPipeline"]
+ _import_structure["pipeline_onnx_stable_diffusion_inpaint_legacy"] = ["OnnxStableDiffusionInpaintPipelineLegacy"]
+ _import_structure["pipeline_onnx_stable_diffusion_upscale"] = ["OnnxStableDiffusionUpscalePipeline"]
if is_transformers_available() and is_flax_available():
- import flax
+ from ...schedulers.scheduling_pndm_flax import PNDMSchedulerState
- @flax.struct.dataclass
- class FlaxStableDiffusionPipelineOutput(BaseOutput):
- """
- Output class for Flax-based Stable Diffusion pipelines.
+ _additional_imports.update({"PNDMSchedulerState": PNDMSchedulerState})
- Args:
- images (`np.ndarray`):
- Denoised images of array shape of `(batch_size, height, width, num_channels)`.
- nsfw_content_detected (`List[bool]`):
- List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content
- or `None` if safety checking could not be performed.
- """
+ _import_structure["pipeline_flax_stable_diffusion"] = ["FlaxStableDiffusionPipeline"]
+ _import_structure["pipeline_flax_stable_diffusion_img2img"] = ["FlaxStableDiffusionImg2ImgPipeline"]
+ _import_structure["pipeline_flax_stable_diffusion_inpaint"] = ["FlaxStableDiffusionInpaintPipeline"]
+ _import_structure["safety_checker_flax"] = ["FlaxStableDiffusionSafetyChecker"]
- images: np.ndarray
- nsfw_content_detected: List[bool]
+import sys
- from ...schedulers.scheduling_pndm_flax import PNDMSchedulerState
- from .pipeline_flax_stable_diffusion import FlaxStableDiffusionPipeline
- from .pipeline_flax_stable_diffusion_img2img import FlaxStableDiffusionImg2ImgPipeline
- from .pipeline_flax_stable_diffusion_inpaint import FlaxStableDiffusionInpaintPipeline
- from .safety_checker_flax import FlaxStableDiffusionSafetyChecker
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
+for name, value in _additional_imports.items():
+ setattr(sys.modules[__name__], name, value)
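
Because `_LazyModule` resolves names on attribute access, downstream imports keep working unchanged after the conversion above; only the moment at which the heavy pipeline modules are executed changes. A small illustration (assumes torch and transformers are installed):

```python
import diffusers.pipelines.stable_diffusion as sd

# Importing the package no longer executes every pipeline module eagerly.
# Each submodule is loaded the first time one of its names is looked up:
pipeline_cls = sd.StableDiffusionPipeline       # loads pipeline_stable_diffusion on access
output_cls = sd.StableDiffusionPipelineOutput   # now served from the new pipeline_output module
print(pipeline_cls.__module__, output_cls.__module__)
```
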
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py b/src/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
index 9a3b828828e3..6896ef94a3cf 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_cycle_diffusion.py
@@ -29,7 +29,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import DDIMScheduler
-from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor
+from ...utils import PIL_INTERPOLATION, deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_output.py b/src/diffusers/pipelines/stable_diffusion/pipeline_output.py
new file mode 100644
index 000000000000..0ac9d9e1a039
--- /dev/null
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_output.py
@@ -0,0 +1,49 @@
+from dataclasses import dataclass
+from typing import List, Optional, Union
+
+import numpy as np
+import PIL
+
+from ...utils import (
+ BaseOutput,
+ is_flax_available,
+ is_transformers_available,
+)
+
+
+@dataclass
+class StableDiffusionPipelineOutput(BaseOutput):
+ """
+ Output class for Stable Diffusion pipelines.
+
+ Args:
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
+ List of denoised PIL images of length `batch_size` or NumPy array of shape `(batch_size, height, width,
+ num_channels)`.
+ nsfw_content_detected (`List[bool]`)
+ List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or
+ `None` if safety checking could not be performed.
+ """
+
+ images: Union[List[PIL.Image.Image], np.ndarray]
+ nsfw_content_detected: Optional[List[bool]]
+
+
+if is_transformers_available() and is_flax_available():
+ import flax
+
+ @flax.struct.dataclass
+ class FlaxStableDiffusionPipelineOutput(BaseOutput):
+ """
+ Output class for Flax-based Stable Diffusion pipelines.
+
+ Args:
+ images (`np.ndarray`):
+ Denoised images of array shape of `(batch_size, height, width, num_channels)`.
+ nsfw_content_detected (`List[bool]`):
+ List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content
+ or `None` if safety checking could not be performed.
+ """
+
+ images: np.ndarray
+ nsfw_content_detected: List[bool]
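
Moving the output dataclass into its own `pipeline_output.py` keeps the package `__init__.py` import-light while leaving the class itself untouched. A quick sketch of how the class behaves (the zero array below is a placeholder, not real pipeline output):

```python
import numpy as np

from diffusers.pipelines.stable_diffusion.pipeline_output import StableDiffusionPipelineOutput

# BaseOutput subclasses act as both a dataclass and an ordered mapping.
out = StableDiffusionPipelineOutput(
    images=np.zeros((1, 64, 64, 3), dtype=np.float32),
    nsfw_content_detected=[False],
)
assert out.images is out["images"]
```
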
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
index 6faec1f9a140..a84b316bbf62 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
@@ -30,9 +30,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
index b5f94add9f18..d64e02e8ecd0 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
@@ -27,7 +27,8 @@
from ...models.attention_processor import Attention
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor, replace_example_docstring
+from ...utils import deprecate, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
index 0ab0b85a46c2..3be87fe641f6 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py
@@ -28,7 +28,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import PIL_INTERPOLATION, deprecate, logging, randn_tensor
+from ...utils import PIL_INTERPOLATION, deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_diffedit.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_diffedit.py
index 261dabe46754..13522fa780ca 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_diffedit.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_diffedit.py
@@ -35,9 +35,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen.py
index 78d0e852a632..7748896524c0 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen.py
@@ -31,9 +31,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen_text_image.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen_text_image.py
index 0940b830065c..01cef5438a1e 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen_text_image.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_gligen_text_image.py
@@ -36,9 +36,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .clip_image_project_model import CLIPImageProjection
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
index d6214b8c041c..328e7165a188 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_image_variation.py
@@ -24,7 +24,8 @@
from ...image_processor import VaeImageProcessor
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py
index 5e7f5f01cb28..13d971de2844 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py
@@ -33,9 +33,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
index c1fb5831a305..a01442df5ce8 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
@@ -27,7 +27,8 @@
from ...models import AsymmetricAutoencoderKL, AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging, randn_tensor
+from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
index f5b60d95e543..3be6fc93e970 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py
@@ -27,14 +27,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- PIL_INTERPOLATION,
- deprecate,
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
-)
+from ...utils import PIL_INTERPOLATION, deprecate, is_accelerate_available, is_accelerate_version, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
index 8afaec267c9b..8ed36f771db9 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
@@ -24,14 +24,8 @@
from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- PIL_INTERPOLATION,
- deprecate,
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
-)
+from ...utils import PIL_INTERPOLATION, deprecate, is_accelerate_available, is_accelerate_version, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_k_diffusion.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_k_diffusion.py
index 92e481e707c3..f4509cd4a960 100755
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_k_diffusion.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_k_diffusion.py
@@ -24,7 +24,8 @@
from ...loaders import LoraLoaderMixin, TextualInversionLoaderMixin
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import LMSDiscreteScheduler
-from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging, randn_tensor
+from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
index 323a583d4558..4141b65f5096 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
@@ -24,7 +24,8 @@
from ...image_processor import PipelineImageInput, VaeImageProcessor
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import EulerDiscreteScheduler
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
index 13ccb226b0d7..3400497670c9 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
@@ -32,9 +32,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py
index 0b96a2cc8195..a92515cfb4a5 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py
@@ -24,7 +24,8 @@
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import PNDMScheduler
from ...schedulers.scheduling_utils import SchedulerMixin
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
index 84bd9f7e8815..0956bfefa372 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
@@ -23,7 +23,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import DDIMScheduler
-from ...utils import deprecate, logging, randn_tensor, replace_example_docstring
+from ...utils import deprecate, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py
index 7ce3dfc35908..cf597ac062bf 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py
@@ -28,9 +28,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py
index f2b281e8c6c7..be3ffa4071eb 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py
@@ -42,9 +42,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
index 539696e9d5b6..7580c11936c0 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
@@ -24,7 +24,8 @@
from ...models import AutoencoderKL, UNet2DConditionModel
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor, replace_example_docstring
+from ...utils import deprecate, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
index d8700d582f5e..4e5e77a5e2db 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
@@ -32,7 +32,8 @@
)
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import DDPMScheduler, KarrasDiffusionSchedulers
-from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging, randn_tensor
+from ...utils import deprecate, is_accelerate_available, is_accelerate_version, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
index 10207d0ba32d..2ac9a52570ca 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
@@ -30,9 +30,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
diff --git a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
index 1a7427c21bc5..dae0846ea64b 100644
--- a/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
+++ b/src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
@@ -27,7 +27,8 @@
from ...models.embeddings import get_timestep_embedding
from ...models.lora import adjust_lora_scale_text_encoder
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, is_accelerate_version, logging, randn_tensor, replace_example_docstring
+from ...utils import deprecate, is_accelerate_version, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .stable_unclip_image_normalizer import StableUnCLIPImageNormalizer
diff --git a/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py b/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
index 10bb7418e2c3..88b6e29f4b21 100644
--- a/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
+++ b/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
@@ -10,7 +10,8 @@
from ...configuration_utils import FrozenDict
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionSafePipelineOutput
from .safety_checker import SafeStableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/__init__.py b/src/diffusers/pipelines/stable_diffusion_xl/__init__.py
index 02bd96cfc23c..ebe12db15fd9 100644
--- a/src/diffusers/pipelines/stable_diffusion_xl/__init__.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/__init__.py
@@ -1,38 +1,39 @@
-from dataclasses import dataclass
-from typing import List, Optional, Union
-
-import numpy as np
-import PIL
-
from ...utils import (
- BaseOutput,
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
)
-@dataclass
-class StableDiffusionXLPipelineOutput(BaseOutput):
- """
- Output class for Stable Diffusion pipelines.
-
- Args:
- images (`List[PIL.Image.Image]` or `np.ndarray`)
- List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
- num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.
- """
-
- images: Union[List[PIL.Image.Image], np.ndarray]
+_import_structure = {}
+_dummy_objects = {}
+_import_structure["pipeline_output"] = ["StableDiffusionXLPipelineOutput"]
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
- from .pipeline_stable_diffusion_xl import StableDiffusionXLPipeline
- from .pipeline_stable_diffusion_xl_img2img import StableDiffusionXLImg2ImgPipeline
- from .pipeline_stable_diffusion_xl_inpaint import StableDiffusionXLInpaintPipeline
- from .pipeline_stable_diffusion_xl_instruct_pix2pix import StableDiffusionXLInstructPix2PixPipeline
+ _import_structure["pipeline_stable_diffusion_xl"] = ["StableDiffusionXLPipeline"]
+ _import_structure["pipeline_stable_diffusion_xl_img2img"] = ["StableDiffusionXLImg2ImgPipeline"]
+ _import_structure["pipeline_stable_diffusion_xl_inpaint"] = ["StableDiffusionXLInpaintPipeline"]
+ _import_structure["pipeline_stable_diffusion_xl_instruct_pix2pix"] = ["StableDiffusionXLInstructPix2PixPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_output.py b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_output.py
new file mode 100644
index 000000000000..0c9515da34ef
--- /dev/null
+++ b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_output.py
@@ -0,0 +1,21 @@
+from dataclasses import dataclass
+from typing import List, Union
+
+import numpy as np
+import PIL
+
+from ...utils import BaseOutput
+
+
+@dataclass
+class StableDiffusionXLPipelineOutput(BaseOutput):
+ """
+ Output class for Stable Diffusion pipelines.
+
+ Args:
+ images (`List[PIL.Image.Image]` or `np.ndarray`)
+ List of denoised PIL images of length `batch_size` or numpy array of shape `(batch_size, height, width,
+ num_channels)`. PIL images or numpy array present the denoised images of the diffusion pipeline.
+ """
+
+ images: Union[List[PIL.Image.Image], np.ndarray]
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
index 7b7755085ed6..81c783bdfd2f 100644
--- a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py
@@ -39,9 +39,9 @@
is_accelerate_version,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
index 04902234d54e..5af3b07f28a3 100644
--- a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py
@@ -36,9 +36,9 @@
is_accelerate_version,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py
index 1d86dff702ef..c47b53b53bef 100644
--- a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_inpaint.py
@@ -38,9 +38,9 @@
is_accelerate_version,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py
index fe9fc1a53d32..c283f5bade68 100644
--- a/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py
+++ b/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_instruct_pix2pix.py
@@ -36,9 +36,9 @@
is_accelerate_version,
is_invisible_watermark_available,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionXLPipelineOutput
diff --git a/src/diffusers/pipelines/stochastic_karras_ve/__init__.py b/src/diffusers/pipelines/stochastic_karras_ve/__init__.py
index 5a63c1d24afb..2f82b438c5e3 100644
--- a/src/diffusers/pipelines/stochastic_karras_ve/__init__.py
+++ b/src/diffusers/pipelines/stochastic_karras_ve/__init__.py
@@ -1 +1,15 @@
-from .pipeline_stochastic_karras_ve import KarrasVePipeline
+from ...utils import _LazyModule
+
+
+_import_structure = {}
+_import_structure["pipeline_stochastic_karras_ve"] = ["KarrasVePipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py b/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py
index 61b5ed2d160f..d850f5a73351 100644
--- a/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py
+++ b/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py
@@ -18,7 +18,7 @@
from ...models import UNet2DModel
from ...schedulers import KarrasVeScheduler
-from ...utils import randn_tensor
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/t2i_adapter/__init__.py b/src/diffusers/pipelines/t2i_adapter/__init__.py
index a9a81df36a1a..b6e6ee724a67 100644
--- a/src/diffusers/pipelines/t2i_adapter/__init__.py
+++ b/src/diffusers/pipelines/t2i_adapter/__init__.py
@@ -1,15 +1,34 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_torch_available,
is_transformers_available,
)
+_import_structure = {}
+_dummy_objects = {}
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
else:
- from .pipeline_stable_diffusion_adapter import StableDiffusionAdapterPipeline
- from .pipeline_stable_diffusion_xl_adapter import StableDiffusionXLAdapterPipeline
+ _import_structure["pipeline_stable_diffusion_adapter"] = ["StableDiffusionAdapterPipeline"]
+ _import_structure["pipeline_stable_diffusion_xl_adapter"] = ["StableDiffusionXLAdapterPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py b/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py
index 93b5f3b25d8b..8884c94eb72e 100644
--- a/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py
+++ b/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_adapter.py
@@ -33,9 +33,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from ..stable_diffusion.safety_checker import StableDiffusionSafetyChecker
diff --git a/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py b/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py
index 9bb8569e331d..00640facf604 100644
--- a/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py
+++ b/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py
@@ -38,9 +38,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
diff --git a/src/diffusers/pipelines/text_to_video_synthesis/__init__.py b/src/diffusers/pipelines/text_to_video_synthesis/__init__.py
index 97683885aac9..af3b9bfde1ce 100644
--- a/src/diffusers/pipelines/text_to_video_synthesis/__init__.py
+++ b/src/diffusers/pipelines/text_to_video_synthesis/__init__.py
@@ -1,32 +1,35 @@
-from dataclasses import dataclass
-from typing import List, Optional, Union
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
-import numpy as np
-import torch
-
-from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
-
-
-@dataclass
-class TextToVideoSDPipelineOutput(BaseOutput):
- """
- Output class for text-to-video pipelines.
-
- Args:
- frames (`List[np.ndarray]` or `torch.FloatTensor`)
- List of denoised frames (essentially images) as NumPy arrays of shape `(height, width, num_channels)` or as
- a `torch` tensor. The length of the list denotes the video length (the number of frames).
- """
-
- frames: Union[List[np.ndarray], torch.FloatTensor]
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import * # noqa F403
+ from ...utils import dummy_torch_and_transformers_objects # noqa F403
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
else:
- from .pipeline_text_to_video_synth import TextToVideoSDPipeline
- from .pipeline_text_to_video_synth_img2img import VideoToVideoSDPipeline # noqa: F401
- from .pipeline_text_to_video_zero import TextToVideoZeroPipeline
+ _import_structure["pipeline_output"] = ["TextToVideoSDPipelineOutput"]
+ _import_structure["pipeline_text_to_video_synth"] = ["TextToVideoSDPipeline"]
+ _import_structure["pipeline_text_to_video_synth_img2img"] = ["VideoToVideoSDPipeline"]
+ _import_structure["pipeline_text_to_video_zero"] = ["TextToVideoZeroPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/text_to_video_synthesis/pipeline_output.py b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_output.py
new file mode 100644
index 000000000000..411515809e6f
--- /dev/null
+++ b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_output.py
@@ -0,0 +1,23 @@
+from dataclasses import dataclass
+from typing import List, Union
+
+import numpy as np
+import torch
+
+from ...utils import (
+ BaseOutput,
+)
+
+
+@dataclass
+class TextToVideoSDPipelineOutput(BaseOutput):
+ """
+ Output class for text-to-video pipelines.
+
+ Args:
+        frames (`List[np.ndarray]` or `torch.FloatTensor`):
+ List of denoised frames (essentially images) as NumPy arrays of shape `(height, width, num_channels)` or as
+ a `torch` tensor. The length of the list denotes the video length (the number of frames).
+ """
+
+ frames: Union[List[np.ndarray], torch.FloatTensor]
diff --git a/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
index 72063769c868..678c2fbff438 100644
--- a/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
+++ b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py
@@ -28,9 +28,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import TextToVideoSDPipelineOutput
diff --git a/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py
index cb0c24c474a4..b7a4bfdd8859 100644
--- a/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py
+++ b/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth_img2img.py
@@ -29,9 +29,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from . import TextToVideoSDPipelineOutput
diff --git a/src/diffusers/pipelines/unclip/__init__.py b/src/diffusers/pipelines/unclip/__init__.py
index 075e66bb680a..f546dbb5041d 100644
--- a/src/diffusers/pipelines/unclip/__init__.py
+++ b/src/diffusers/pipelines/unclip/__init__.py
@@ -1,17 +1,38 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_torch_available,
is_transformers_available,
is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.25.0")):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
from ...utils.dummy_torch_and_transformers_objects import UnCLIPImageVariationPipeline, UnCLIPPipeline
+
+ _dummy_objects.update(
+ {"UnCLIPImageVariationPipeline": UnCLIPImageVariationPipeline, "UnCLIPPipeline": UnCLIPPipeline}
+ )
else:
- from .pipeline_unclip import UnCLIPPipeline
- from .pipeline_unclip_image_variation import UnCLIPImageVariationPipeline
- from .text_proj import UnCLIPTextProjModel
+ _import_structure["pipeline_unclip"] = ["UnCLIPPipeline"]
+ _import_structure["pipeline_unclip_image_variation"] = ["UnCLIPImageVariationPipeline"]
+ _import_structure["text_proj"] = ["UnCLIPTextProjModel"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/unclip/pipeline_unclip.py b/src/diffusers/pipelines/unclip/pipeline_unclip.py
index 92d42bf0c75e..7e8dc22f6ca2 100644
--- a/src/diffusers/pipelines/unclip/pipeline_unclip.py
+++ b/src/diffusers/pipelines/unclip/pipeline_unclip.py
@@ -22,7 +22,8 @@
from ...models import PriorTransformer, UNet2DConditionModel, UNet2DModel
from ...schedulers import UnCLIPScheduler
-from ...utils import logging, randn_tensor
+from ...utils import logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .text_proj import UnCLIPTextProjModel
diff --git a/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py b/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
index f22ede9dede9..8ec917f9e297 100644
--- a/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
+++ b/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py
@@ -27,7 +27,8 @@
from ...models import UNet2DConditionModel, UNet2DModel
from ...schedulers import UnCLIPScheduler
-from ...utils import logging, randn_tensor
+from ...utils import logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .text_proj import UnCLIPTextProjModel
diff --git a/src/diffusers/pipelines/unidiffuser/__init__.py b/src/diffusers/pipelines/unidiffuser/__init__.py
index a774e3274030..ac0207b6045d 100644
--- a/src/diffusers/pipelines/unidiffuser/__init__.py
+++ b/src/diffusers/pipelines/unidiffuser/__init__.py
@@ -1,11 +1,15 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_torch_available,
is_transformers_available,
- is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
@@ -14,7 +18,25 @@
ImageTextPipelineOutput,
UniDiffuserPipeline,
)
+
+ _dummy_objects.update(
+ {"ImageTextPipelineOutput": ImageTextPipelineOutput, "UniDiffuserPipeline": UniDiffuserPipeline}
+ )
+
else:
- from .modeling_text_decoder import UniDiffuserTextDecoder
- from .modeling_uvit import UniDiffuserModel, UTransformer2DModel
- from .pipeline_unidiffuser import ImageTextPipelineOutput, UniDiffuserPipeline
+ _import_structure["modeling_text_decoder"] = ["UniDiffuserTextDecoder"]
+ _import_structure["modeling_uvit"] = ["UniDiffuserModel", "UTransformer2DModel"]
+ _import_structure["pipeline_unidiffuser"] = ["ImageTextPipelineOutput", "UniDiffuserPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py b/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py
index 670c915c6de1..2fcb89734089 100644
--- a/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py
+++ b/src/diffusers/pipelines/unidiffuser/pipeline_unidiffuser.py
@@ -15,15 +15,9 @@
from ...models import AutoencoderKL
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import (
- PIL_INTERPOLATION,
- deprecate,
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
-)
+from ...utils import PIL_INTERPOLATION, deprecate, is_accelerate_available, is_accelerate_version, logging
from ...utils.outputs import BaseOutput
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .modeling_text_decoder import UniDiffuserTextDecoder
from .modeling_uvit import UniDiffuserModel
diff --git a/src/diffusers/pipelines/versatile_diffusion/__init__.py b/src/diffusers/pipelines/versatile_diffusion/__init__.py
index abf9dcff59db..8fbe932b18a6 100644
--- a/src/diffusers/pipelines/versatile_diffusion/__init__.py
+++ b/src/diffusers/pipelines/versatile_diffusion/__init__.py
@@ -1,11 +1,16 @@
from ...utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_torch_available,
is_transformers_available,
is_transformers_version,
)
+_import_structure = {}
+_dummy_objects = {}
+
+
try:
if not (is_transformers_available() and is_torch_available() and is_transformers_version(">=", "4.25.0")):
raise OptionalDependencyNotAvailable()
@@ -16,9 +21,31 @@
VersatileDiffusionPipeline,
VersatileDiffusionTextToImagePipeline,
)
+
+ _dummy_objects.update(
+ {
+ "VersatileDiffusionDualGuidedPipeline": VersatileDiffusionDualGuidedPipeline,
+ "VersatileDiffusionImageVariationPipeline": VersatileDiffusionImageVariationPipeline,
+ "VersatileDiffusionPipeline": VersatileDiffusionPipeline,
+ "VersatileDiffusionTextToImagePipeline": VersatileDiffusionTextToImagePipeline,
+ }
+ )
else:
- from .modeling_text_unet import UNetFlatConditionModel
- from .pipeline_versatile_diffusion import VersatileDiffusionPipeline
- from .pipeline_versatile_diffusion_dual_guided import VersatileDiffusionDualGuidedPipeline
- from .pipeline_versatile_diffusion_image_variation import VersatileDiffusionImageVariationPipeline
- from .pipeline_versatile_diffusion_text_to_image import VersatileDiffusionTextToImagePipeline
+ _import_structure["modeling_text_unet"] = ["UNetFlatConditionModel"]
+ _import_structure["pipeline_versatile_diffusion"] = ["VersatileDiffusionPipeline"]
+ _import_structure["pipeline_versatile_diffusion_dual_guided"] = ["VersatileDiffusionDualGuidedPipeline"]
+ _import_structure["pipeline_versatile_diffusion_image_variation"] = ["VersatileDiffusionImageVariationPipeline"]
+ _import_structure["pipeline_versatile_diffusion_text_to_image"] = ["VersatileDiffusionTextToImagePipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
+
+for name, value in _dummy_objects.items():
+ setattr(sys.modules[__name__], name, value)
diff --git a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py
index 9bd724429e5d..cbb91e8a9e9a 100644
--- a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py
+++ b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py
@@ -29,7 +29,8 @@
from ...image_processor import VaeImageProcessor
from ...models import AutoencoderKL, DualTransformer2DModel, Transformer2DModel, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .modeling_text_unet import UNetFlatConditionModel
diff --git a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py
index da6f9bf23589..f06aa4b45d4d 100644
--- a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py
+++ b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py
@@ -24,7 +24,8 @@
from ...image_processor import VaeImageProcessor
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
diff --git a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py
index a443bc9d2225..f2d3aebce2b6 100644
--- a/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py
+++ b/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_text_to_image.py
@@ -22,7 +22,8 @@
from ...image_processor import VaeImageProcessor
from ...models import AutoencoderKL, Transformer2DModel, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
-from ...utils import deprecate, logging, randn_tensor
+from ...utils import deprecate, logging
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .modeling_text_unet import UNetFlatConditionModel
diff --git a/src/diffusers/pipelines/vq_diffusion/__init__.py b/src/diffusers/pipelines/vq_diffusion/__init__.py
index da60bf73ad42..8917802c2694 100644
--- a/src/diffusers/pipelines/vq_diffusion/__init__.py
+++ b/src/diffusers/pipelines/vq_diffusion/__init__.py
@@ -1,10 +1,39 @@
-from ...utils import OptionalDependencyNotAvailable, is_torch_available, is_transformers_available
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ is_torch_available,
+ is_transformers_available,
+)
+
+
+_import_structure = {}
+_dummy_objects = {}
try:
if not (is_transformers_available() and is_torch_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ...utils.dummy_torch_and_transformers_objects import *
+ from ...utils.dummy_torch_and_transformers_objects import (
+ LearnedClassifierFreeSamplingEmbeddings,
+ VQDiffusionPipeline,
+ )
+
+ _dummy_objects.update(
+ {
+ "LearnedClassifierFreeSamplingEmbeddings": LearnedClassifierFreeSamplingEmbeddings,
+ "VQDiffusionPipeline": VQDiffusionPipeline,
+ }
+ )
else:
- from .pipeline_vq_diffusion import LearnedClassifierFreeSamplingEmbeddings, VQDiffusionPipeline
+ _import_structure["pipeline_vq_diffusion"] = ["LearnedClassifierFreeSamplingEmbeddings", "VQDiffusionPipeline"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/wuerstchen/__init__.py b/src/diffusers/pipelines/wuerstchen/__init__.py
index a6f6321b048a..f77b597a0b92 100644
--- a/src/diffusers/pipelines/wuerstchen/__init__.py
+++ b/src/diffusers/pipelines/wuerstchen/__init__.py
@@ -1,10 +1,38 @@
-from ...utils import is_torch_available, is_transformers_available
+from ...utils import (
+ OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
+ is_torch_available,
+ is_transformers_available,
+)
-if is_transformers_available() and is_torch_available():
- from .modeling_paella_vq_model import PaellaVQModel
- from .modeling_wuerstchen_diffnext import WuerstchenDiffNeXt
- from .modeling_wuerstchen_prior import WuerstchenPrior
- from .pipeline_wuerstchen import WuerstchenDecoderPipeline
- from .pipeline_wuerstchen_combined import WuerstchenCombinedPipeline
- from .pipeline_wuerstchen_prior import WuerstchenPriorPipeline
+_import_structure = {}
+_dummy_objects = {}
+try:
+ if not (is_transformers_available() and is_torch_available()):
+ raise OptionalDependencyNotAvailable()
+
+except OptionalDependencyNotAvailable:
+ from ...utils import dummy_torch_and_transformers_objects
+
+ _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects))
+
+else:
+ _import_structure["modeling_paella_vq_model"] = ["PaellaVQModel"]
+ _import_structure["modeling_wuerstchen_diffnext"] = ["WuerstchenDiffNeXt"]
+ _import_structure["modeling_wuerstchen_prior"] = ["WuerstchenPrior"]
+ _import_structure["pipeline_wuerstchen"] = ["WuerstchenDecoderPipeline"]
+ _import_structure["pipeline_wuerstchen_combined"] = ["WuerstchenCombinedPipeline"]
+ _import_structure["pipeline_wuerstchen_prior"] = ["WuerstchenPriorPipeline"]
+
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(
+ __name__,
+ globals()["__file__"],
+ _import_structure,
+ module_spec=__spec__,
+)
diff --git a/src/diffusers/pipelines/wuerstchen/modeling_paella_vq_model.py b/src/diffusers/pipelines/wuerstchen/modeling_paella_vq_model.py
index 09bdd16592df..7ee42faa0e82 100644
--- a/src/diffusers/pipelines/wuerstchen/modeling_paella_vq_model.py
+++ b/src/diffusers/pipelines/wuerstchen/modeling_paella_vq_model.py
@@ -22,7 +22,7 @@
from ...models.modeling_utils import ModelMixin
from ...models.vae import DecoderOutput, VectorQuantizer
from ...models.vq_model import VQEncoderOutput
-from ...utils import apply_forward_hook
+from ...utils.accelerate_utils import apply_forward_hook
class MixingResidualBlock(nn.Module):
diff --git a/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py b/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py
index 78aeebed7943..7f6b0546da7b 100644
--- a/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py
+++ b/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen.py
@@ -19,7 +19,8 @@
from transformers import CLIPTextModel, CLIPTokenizer
from ...schedulers import DDPMWuerstchenScheduler
-from ...utils import is_accelerate_available, is_accelerate_version, logging, randn_tensor, replace_example_docstring
+from ...utils import is_accelerate_available, is_accelerate_version, logging, replace_example_docstring
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
from .modeling_paella_vq_model import PaellaVQModel
from .modeling_wuerstchen_diffnext import WuerstchenDiffNeXt
diff --git a/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py b/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py
index 8b13d8fdf2b7..297462bd96f7 100644
--- a/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py
+++ b/src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py
@@ -26,9 +26,9 @@
is_accelerate_available,
is_accelerate_version,
logging,
- randn_tensor,
replace_example_docstring,
)
+from ...utils.torch_utils import randn_tensor
from ..pipeline_utils import DiffusionPipeline
from .modeling_wuerstchen_prior import WuerstchenPrior
diff --git a/src/diffusers/schedulers/__init__.py b/src/diffusers/schedulers/__init__.py
index 84df4ffb84db..270e10cdbe18 100644
--- a/src/diffusers/schedulers/__init__.py
+++ b/src/diffusers/schedulers/__init__.py
@@ -15,6 +15,7 @@
from ..utils import (
OptionalDependencyNotAvailable,
+ _LazyModule,
is_flax_available,
is_scipy_available,
is_torch_available,
@@ -22,38 +23,49 @@
)
+_import_structure = {}
+_dummy_modules = {}
+
try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_pt_objects import * # noqa F403
+ from ..utils import dummy_pt_objects # noqa F403
+
+ modules = {}
+ for name in dir(dummy_pt_objects):
+ if (not name.endswith("Scheduler")) or name.startswith("_"):
+ continue
+ modules[name] = getattr(dummy_pt_objects, name)
+ _dummy_modules.update(modules)
+
else:
- from .scheduling_consistency_models import CMStochasticIterativeScheduler
- from .scheduling_ddim import DDIMScheduler
- from .scheduling_ddim_inverse import DDIMInverseScheduler
- from .scheduling_ddim_parallel import DDIMParallelScheduler
- from .scheduling_ddpm import DDPMScheduler
- from .scheduling_ddpm_parallel import DDPMParallelScheduler
- from .scheduling_ddpm_wuerstchen import DDPMWuerstchenScheduler
- from .scheduling_deis_multistep import DEISMultistepScheduler
- from .scheduling_dpmsolver_multistep import DPMSolverMultistepScheduler
- from .scheduling_dpmsolver_multistep_inverse import DPMSolverMultistepInverseScheduler
- from .scheduling_dpmsolver_singlestep import DPMSolverSinglestepScheduler
- from .scheduling_euler_ancestral_discrete import EulerAncestralDiscreteScheduler
- from .scheduling_euler_discrete import EulerDiscreteScheduler
- from .scheduling_heun_discrete import HeunDiscreteScheduler
- from .scheduling_ipndm import IPNDMScheduler
- from .scheduling_k_dpm_2_ancestral_discrete import KDPM2AncestralDiscreteScheduler
- from .scheduling_k_dpm_2_discrete import KDPM2DiscreteScheduler
- from .scheduling_karras_ve import KarrasVeScheduler
- from .scheduling_pndm import PNDMScheduler
- from .scheduling_repaint import RePaintScheduler
- from .scheduling_sde_ve import ScoreSdeVeScheduler
- from .scheduling_sde_vp import ScoreSdeVpScheduler
- from .scheduling_unclip import UnCLIPScheduler
- from .scheduling_unipc_multistep import UniPCMultistepScheduler
- from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
- from .scheduling_vq_diffusion import VQDiffusionScheduler
+ _import_structure["scheduling_consistency_models"] = ["CMStochasticIterativeScheduler"]
+ _import_structure["scheduling_ddim"] = ["DDIMScheduler"]
+ _import_structure["scheduling_ddim_inverse"] = ["DDIMInverseScheduler"]
+ _import_structure["scheduling_ddim_parallel"] = ["DDIMParallelScheduler"]
+ _import_structure["scheduling_ddpm"] = ["DDPMScheduler"]
+ _import_structure["scheduling_ddpm_parallel"] = ["DDPMParallelScheduler"]
+ _import_structure["scheduling_deis_multistep"] = ["DEISMultistepScheduler"]
+ _import_structure["scheduling_dpmsolver_multistep"] = ["DPMSolverMultistepScheduler"]
+ _import_structure["scheduling_dpmsolver_multistep_inverse"] = ["DPMSolverMultistepInverseScheduler"]
+ _import_structure["scheduling_dpmsolver_singlestep"] = ["DPMSolverSinglestepScheduler"]
+ _import_structure["scheduling_euler_ancestral_discrete"] = ["EulerAncestralDiscreteScheduler"]
+ _import_structure["scheduling_euler_discrete"] = ["EulerDiscreteScheduler"]
+ _import_structure["scheduling_heun_discrete"] = ["HeunDiscreteScheduler"]
+ _import_structure["scheduling_ipndm"] = ["IPNDMScheduler"]
+ _import_structure["scheduling_k_dpm_2_ancestral_discrete"] = ["KDPM2AncestralDiscreteScheduler"]
+ _import_structure["scheduling_k_dpm_2_discrete"] = ["KDPM2DiscreteScheduler"]
+ _import_structure["scheduling_karras_ve"] = ["KarrasVeScheduler"]
+ _import_structure["scheduling_pndm"] = ["PNDMScheduler"]
+ _import_structure["scheduling_repaint"] = ["RePaintScheduler"]
+ _import_structure["scheduling_sde_ve"] = ["ScoreSdeVeScheduler"]
+ _import_structure["scheduling_sde_vp"] = ["ScoreSdeVpScheduler"]
+ _import_structure["scheduling_unclip"] = ["UnCLIPScheduler"]
+ _import_structure["scheduling_unipc_multistep"] = ["UniPCMultistepScheduler"]
+ _import_structure["scheduling_utils"] = ["KarrasDiffusionSchedulers", "SchedulerMixin"]
+ _import_structure["scheduling_vq_diffusion"] = ["VQDiffusionScheduler"]
+ _import_structure["scheduling_ddpm_wuerstchen"] = ["DDPMWuerstchenScheduler"]
try:
if not is_flax_available():
@@ -61,33 +73,59 @@
except OptionalDependencyNotAvailable:
from ..utils.dummy_flax_objects import * # noqa F403
else:
- from .scheduling_ddim_flax import FlaxDDIMScheduler
- from .scheduling_ddpm_flax import FlaxDDPMScheduler
- from .scheduling_dpmsolver_multistep_flax import FlaxDPMSolverMultistepScheduler
- from .scheduling_karras_ve_flax import FlaxKarrasVeScheduler
- from .scheduling_lms_discrete_flax import FlaxLMSDiscreteScheduler
- from .scheduling_pndm_flax import FlaxPNDMScheduler
- from .scheduling_sde_ve_flax import FlaxScoreSdeVeScheduler
- from .scheduling_utils_flax import (
- FlaxKarrasDiffusionSchedulers,
- FlaxSchedulerMixin,
- FlaxSchedulerOutput,
- broadcast_to_shape_from_left,
- )
+ _import_structure["scheduling_ddim_flax"] = ["FlaxDDIMScheduler"]
+ _import_structure["scheduling_ddpm_flax"] = ["FlaxDDPMScheduler"]
+ _import_structure["scheduling_dpmsolver_multistep_flax"] = ["FlaxDPMSolverMultistepScheduler"]
+ _import_structure["scheduling_karras_ve_flax"] = ["FlaxKarrasVeScheduler"]
+ _import_structure["scheduling_lms_discrete_flax"] = ["FlaxLMSDiscreteScheduler"]
+ _import_structure["scheduling_pndm_flax"] = ["FlaxPNDMScheduler"]
+ _import_structure["scheduling_sde_ve_flax"] = ["FlaxScoreSdeVeScheduler"]
+ _import_structure["scheduling_utils_flax"] = [
+ "FlaxKarrasDiffusionSchedulers",
+ "FlaxSchedulerMixin",
+ "FlaxSchedulerOutput",
+ "broadcast_to_shape_from_left",
+ ]
try:
if not (is_torch_available() and is_scipy_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_scipy_objects import * # noqa F403
+ from ..utils import dummy_torch_and_scipy_objects # noqa F403
+
+ modules = {}
+ for name in dir(dummy_torch_and_scipy_objects):
+ if (not name.endswith("Scheduler")) or name.startswith("_"):
+ continue
+ modules[name] = getattr(dummy_torch_and_scipy_objects, name)
+
+ _dummy_modules.update(modules)
+
else:
- from .scheduling_lms_discrete import LMSDiscreteScheduler
+ _import_structure["scheduling_lms_discrete"] = ["LMSDiscreteScheduler"]
try:
if not (is_torch_available() and is_torchsde_available()):
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
- from ..utils.dummy_torch_and_torchsde_objects import * # noqa F403
+ from ..utils import dummy_torch_and_torchsde_objects # noqa F403
+
+ modules = {}
+ for name in dir(dummy_torch_and_torchsde_objects):
+ if (not name.endswith("Scheduler")) or name.startswith("_"):
+ continue
+ modules[name] = getattr(dummy_torch_and_torchsde_objects, name)
+
+ _dummy_modules.update(modules)
+
+
else:
- from .scheduling_dpmsolver_sde import DPMSolverSDEScheduler
+ _import_structure["scheduling_dpmsolver_sde"] = ["DPMSolverSDEScheduler"]
+
+import sys
+
+
+sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
+for name, value in _dummy_modules.items():
+ setattr(sys.modules[__name__], name, value)
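Unlike the pipeline packages, the scheduler `__init__.py` filters the dummy module by name rather than calling `get_objects_from_module`. A hedged sketch of what the loop above collects when torch is unavailable:

```python
from diffusers.utils import dummy_pt_objects

# Same filter as in the hunk above: keep only public names ending in "Scheduler".
dummy_schedulers = {
    name: getattr(dummy_pt_objects, name)
    for name in dir(dummy_pt_objects)
    if name.endswith("Scheduler") and not name.startswith("_")
}
# e.g. {"DDIMScheduler": <dummy class>, "DDPMScheduler": <dummy class>, ...}
```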
diff --git a/src/diffusers/schedulers/scheduling_consistency_models.py b/src/diffusers/schedulers/scheduling_consistency_models.py
index 735c6fc6cdd7..23cd3ec134b7 100644
--- a/src/diffusers/schedulers/scheduling_consistency_models.py
+++ b/src/diffusers/schedulers/scheduling_consistency_models.py
@@ -19,7 +19,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, logging, randn_tensor
+from ..utils import BaseOutput, logging
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_ddim.py b/src/diffusers/schedulers/scheduling_ddim.py
index 512e449edea3..aab5255abced 100644
--- a/src/diffusers/schedulers/scheduling_ddim.py
+++ b/src/diffusers/schedulers/scheduling_ddim.py
@@ -23,7 +23,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_ddim_parallel.py b/src/diffusers/schedulers/scheduling_ddim_parallel.py
index 0f1a9ebfcc43..f90a271dfc06 100644
--- a/src/diffusers/schedulers/scheduling_ddim_parallel.py
+++ b/src/diffusers/schedulers/scheduling_ddim_parallel.py
@@ -23,7 +23,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_ddpm.py b/src/diffusers/schedulers/scheduling_ddpm.py
index db4ede39e2e3..86f7e84ff07f 100644
--- a/src/diffusers/schedulers/scheduling_ddpm.py
+++ b/src/diffusers/schedulers/scheduling_ddpm.py
@@ -22,7 +22,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_ddpm_parallel.py b/src/diffusers/schedulers/scheduling_ddpm_parallel.py
index 7e04001987f2..2f3bdd39aaa4 100644
--- a/src/diffusers/schedulers/scheduling_ddpm_parallel.py
+++ b/src/diffusers/schedulers/scheduling_ddpm_parallel.py
@@ -22,7 +22,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_ddpm_wuerstchen.py b/src/diffusers/schedulers/scheduling_ddpm_wuerstchen.py
index 28311fc03301..781efb12b18b 100644
--- a/src/diffusers/schedulers/scheduling_ddpm_wuerstchen.py
+++ b/src/diffusers/schedulers/scheduling_ddpm_wuerstchen.py
@@ -22,7 +22,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py b/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
index 8c25cdff8a07..babba2206de0 100644
--- a/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
+++ b/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
@@ -21,7 +21,7 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import randn_tensor
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput
diff --git a/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py b/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py
index 34639d38a6a2..33a2637d00f3 100644
--- a/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py
+++ b/src/diffusers/schedulers/scheduling_dpmsolver_multistep_inverse.py
@@ -21,7 +21,7 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import randn_tensor
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput
diff --git a/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py b/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
index a776be758189..41ef3a3f2732 100644
--- a/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
+++ b/src/diffusers/schedulers/scheduling_euler_ancestral_discrete.py
@@ -20,7 +20,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, logging, randn_tensor
+from ..utils import BaseOutput, logging
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_euler_discrete.py b/src/diffusers/schedulers/scheduling_euler_discrete.py
index 2cc36a1718d0..0875e1af3325 100644
--- a/src/diffusers/schedulers/scheduling_euler_discrete.py
+++ b/src/diffusers/schedulers/scheduling_euler_discrete.py
@@ -20,7 +20,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, logging, randn_tensor
+from ..utils import BaseOutput, logging
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py b/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
index 0b2569a94f6c..b44ff31379ad 100644
--- a/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
+++ b/src/diffusers/schedulers/scheduling_k_dpm_2_ancestral_discrete.py
@@ -20,7 +20,7 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import randn_tensor
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin, SchedulerOutput
diff --git a/src/diffusers/schedulers/scheduling_karras_ve.py b/src/diffusers/schedulers/scheduling_karras_ve.py
index 1f8613cfe44a..462169b633de 100644
--- a/src/diffusers/schedulers/scheduling_karras_ve.py
+++ b/src/diffusers/schedulers/scheduling_karras_ve.py
@@ -20,7 +20,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_repaint.py b/src/diffusers/schedulers/scheduling_repaint.py
index 941946efe914..733bd0a159fd 100644
--- a/src/diffusers/schedulers/scheduling_repaint.py
+++ b/src/diffusers/schedulers/scheduling_repaint.py
@@ -20,7 +20,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_sde_ve.py b/src/diffusers/schedulers/scheduling_sde_ve.py
index f1026de8f276..8b9439add3ec 100644
--- a/src/diffusers/schedulers/scheduling_sde_ve.py
+++ b/src/diffusers/schedulers/scheduling_sde_ve.py
@@ -21,7 +21,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin, SchedulerOutput
diff --git a/src/diffusers/schedulers/scheduling_sde_vp.py b/src/diffusers/schedulers/scheduling_sde_vp.py
index ff719adbbd28..b14bc867befa 100644
--- a/src/diffusers/schedulers/scheduling_sde_vp.py
+++ b/src/diffusers/schedulers/scheduling_sde_vp.py
@@ -20,7 +20,7 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import randn_tensor
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/schedulers/scheduling_unclip.py b/src/diffusers/schedulers/scheduling_unclip.py
index 844e552c0fb4..2f5b17815dd6 100644
--- a/src/diffusers/schedulers/scheduling_unclip.py
+++ b/src/diffusers/schedulers/scheduling_unclip.py
@@ -20,7 +20,8 @@
import torch
from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
+from ..utils import BaseOutput
+from ..utils.torch_utils import randn_tensor
from .scheduling_utils import SchedulerMixin
diff --git a/src/diffusers/utils/__init__.py b/src/diffusers/utils/__init__.py
index 9b710d214d92..a846f6caef08 100644
--- a/src/diffusers/utils/__init__.py
+++ b/src/diffusers/utils/__init__.py
@@ -18,7 +18,6 @@
from packaging import version
from .. import __version__
-from .accelerate_utils import apply_forward_hook
from .constants import (
CONFIG_NAME,
DEPRECATED_REVISION_ARGS,
@@ -35,6 +34,7 @@
from .deprecation_utils import deprecate
from .doc_utils import replace_example_docstring
from .dynamic_modules_utils import get_class_from_dynamic_module
+from .export_utils import export_to_gif, export_to_obj, export_to_ply, export_to_video
from .hub_utils import (
HF_HUB_OFFLINE,
PushToHubMixin,
@@ -52,6 +52,8 @@
USE_TORCH,
DummyObject,
OptionalDependencyNotAvailable,
+ _LazyModule,
+ get_objects_from_module,
is_accelerate_available,
is_accelerate_version,
is_bs4_available,
@@ -78,32 +80,10 @@
is_xformers_available,
requires_backends,
)
+from .loading_utils import load_image
from .logging import get_logger
from .outputs import BaseOutput
from .pil_utils import PIL_INTERPOLATION, make_image_grid, numpy_to_pil, pt_to_pil
-from .torch_utils import is_compiled_module, randn_tensor
-
-
-if is_torch_available():
- from .testing_utils import (
- floats_tensor,
- load_hf_numpy,
- load_image,
- load_numpy,
- load_pt,
- nightly,
- parse_flag_from_env,
- print_tensor_test,
- require_torch_2,
- require_torch_gpu,
- skip_mps,
- slow,
- torch_all_close,
- torch_device,
- )
- from .torch_utils import maybe_allow_in_graph
-
-from .testing_utils import export_to_gif, export_to_obj, export_to_ply, export_to_video
logger = get_logger(__name__)
diff --git a/src/diffusers/utils/export_utils.py b/src/diffusers/utils/export_utils.py
new file mode 100644
index 000000000000..f7744f9d63eb
--- /dev/null
+++ b/src/diffusers/utils/export_utils.py
@@ -0,0 +1,132 @@
+import io
+import random
+import struct
+import tempfile
+from contextlib import contextmanager
+from typing import List
+
+import numpy as np
+import PIL.Image
+import PIL.ImageOps
+
+from .import_utils import (
+ BACKENDS_MAPPING,
+ is_opencv_available,
+)
+from .logging import get_logger
+
+
+global_rng = random.Random()
+
+logger = get_logger(__name__)
+
+
+@contextmanager
+def buffered_writer(raw_f):
+ f = io.BufferedWriter(raw_f)
+ yield f
+ f.flush()
+
+
+def export_to_gif(image: List[PIL.Image.Image], output_gif_path: str = None) -> str:
+ if output_gif_path is None:
+ output_gif_path = tempfile.NamedTemporaryFile(suffix=".gif").name
+
+ image[0].save(
+ output_gif_path,
+ save_all=True,
+ append_images=image[1:],
+ optimize=False,
+ duration=100,
+ loop=0,
+ )
+ return output_gif_path
+
+
+def export_to_ply(mesh, output_ply_path: str = None):
+ """
+ Write a PLY file for a mesh.
+ """
+ if output_ply_path is None:
+ output_ply_path = tempfile.NamedTemporaryFile(suffix=".ply").name
+
+ coords = mesh.verts.detach().cpu().numpy()
+ faces = mesh.faces.cpu().numpy()
+ rgb = np.stack([mesh.vertex_channels[x].detach().cpu().numpy() for x in "RGB"], axis=1)
+
+ with buffered_writer(open(output_ply_path, "wb")) as f:
+ f.write(b"ply\n")
+ f.write(b"format binary_little_endian 1.0\n")
+ f.write(bytes(f"element vertex {len(coords)}\n", "ascii"))
+ f.write(b"property float x\n")
+ f.write(b"property float y\n")
+ f.write(b"property float z\n")
+ if rgb is not None:
+ f.write(b"property uchar red\n")
+ f.write(b"property uchar green\n")
+ f.write(b"property uchar blue\n")
+ if faces is not None:
+ f.write(bytes(f"element face {len(faces)}\n", "ascii"))
+ f.write(b"property list uchar int vertex_index\n")
+ f.write(b"end_header\n")
+
+ if rgb is not None:
+ rgb = (rgb * 255.499).round().astype(int)
+ vertices = [
+ (*coord, *rgb)
+ for coord, rgb in zip(
+ coords.tolist(),
+ rgb.tolist(),
+ )
+ ]
+ format = struct.Struct("<3f3B")
+ for item in vertices:
+ f.write(format.pack(*item))
+ else:
+ format = struct.Struct("<3f")
+ for vertex in coords.tolist():
+ f.write(format.pack(*vertex))
+
+ if faces is not None:
+        format = struct.Struct("<B3I")
+        for tri in faces.tolist():
+            f.write(format.pack(len(tri), *tri))
+
+    return output_ply_path
+
+
+def export_to_obj(mesh, output_obj_path: str = None):
+    if output_obj_path is None:
+        output_obj_path = tempfile.NamedTemporaryFile(suffix=".obj").name
+
+    verts = mesh.verts.detach().cpu().numpy()
+    faces = mesh.faces.cpu().numpy()
+
+    vertex_colors = np.stack([mesh.vertex_channels[x].detach().cpu().numpy() for x in "RGB"], axis=1)
+    vertices = [
+        "{} {} {} {} {} {}".format(*coord, *color) for coord, color in zip(verts.tolist(), vertex_colors.tolist())
+    ]
+
+    faces = ["f {} {} {}".format(str(tri[0] + 1), str(tri[1] + 1), str(tri[2] + 1)) for tri in faces.tolist()]
+
+    combined_data = ["v " + vertex for vertex in vertices] + faces
+
+    with open(output_obj_path, "w") as f:
+        f.writelines("\n".join(combined_data))
+
+
+def export_to_video(video_frames: List[np.ndarray], output_video_path: str = None) -> str:
+ if is_opencv_available():
+ import cv2
+ else:
+ raise ImportError(BACKENDS_MAPPING["opencv"][1].format("export_to_video"))
+ if output_video_path is None:
+ output_video_path = tempfile.NamedTemporaryFile(suffix=".mp4").name
+
+ fourcc = cv2.VideoWriter_fourcc(*"mp4v")
+ h, w, c = video_frames[0].shape
+ video_writer = cv2.VideoWriter(output_video_path, fourcc, fps=8, frameSize=(w, h))
+ for i in range(len(video_frames)):
+ img = cv2.cvtColor(video_frames[i], cv2.COLOR_RGB2BGR)
+ video_writer.write(img)
+ return output_video_path
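A hedged usage sketch for the export helpers above (the frames are synthetic; only `export_to_video` needs OpenCV):

```python
import numpy as np
import PIL.Image

from diffusers.utils import export_to_gif, export_to_video

# export_to_video expects RGB uint8 numpy frames of shape (H, W, 3);
# export_to_gif expects a list of PIL images.
frames = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(16)]

video_path = export_to_video(frames)                                # temporary .mp4, path returned
gif_path = export_to_gif([PIL.Image.fromarray(f) for f in frames])  # temporary .gif, path returned
```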
diff --git a/src/diffusers/utils/import_utils.py b/src/diffusers/utils/import_utils.py
index 7fe5eacb25b0..1cf319e2941b 100644
--- a/src/diffusers/utils/import_utils.py
+++ b/src/diffusers/utils/import_utils.py
@@ -19,7 +19,9 @@
import os
import sys
from collections import OrderedDict
-from typing import Union
+from itertools import chain
+from types import ModuleType
+from typing import Any, Union
from huggingface_hub.utils import is_jinja_available # noqa: F401
from packaging import version
@@ -219,10 +221,10 @@
try:
_xformers_version = importlib_metadata.version("xformers")
if _torch_available:
- import torch
+ _torch_version = importlib_metadata.version("torch")
+ if version.Version(_torch_version) < version.Version("1.12"):
+ raise ValueError("xformers is installed in your environment and requires PyTorch >= 1.12")
- if version.Version(torch.__version__) < version.Version("1.12"):
- raise ValueError("PyTorch should be >= 1.12")
logger.debug(f"Successfully imported xformers version {_xformers_version}")
except importlib_metadata.PackageNotFoundError:
_xformers_available = False
@@ -647,5 +649,85 @@ def is_k_diffusion_version(operation: str, version: str):
return compare_versions(parse(_k_diffusion_version), operation, version)
+def get_objects_from_module(module):
+ """
+    Returns a dict of object names and values in a module, while skipping private/internal objects.
+
+    Args:
+        module (ModuleType):
+            Module to extract the objects from.
+
+    Returns:
+        dict: Dictionary of object names and corresponding values.
+ """
+
+ objects = {}
+ for name in dir(module):
+ if name.startswith("_"):
+ continue
+ objects[name] = getattr(module, name)
+
+ return objects
+
+
class OptionalDependencyNotAvailable(BaseException):
"""An error indicating that an optional dependency of Diffusers was not found in the environment."""
+
+
+class _LazyModule(ModuleType):
+ """
+ Module class that surfaces all objects but only performs associated imports when the objects are requested.
+ """
+
+ # Very heavily inspired by optuna.integration._IntegrationModule
+ # https://github.com/optuna/optuna/blob/master/optuna/integration/__init__.py
+ def __init__(self, name, module_file, import_structure, module_spec=None, extra_objects=None):
+ super().__init__(name)
+ self._modules = set(import_structure.keys())
+ self._class_to_module = {}
+ for key, values in import_structure.items():
+ for value in values:
+ self._class_to_module[value] = key
+ # Needed for autocompletion in an IDE
+ self.__all__ = list(import_structure.keys()) + list(chain(*import_structure.values()))
+ self.__file__ = module_file
+ self.__spec__ = module_spec
+ self.__path__ = [os.path.dirname(module_file)]
+ self._objects = {} if extra_objects is None else extra_objects
+ self._name = name
+ self._import_structure = import_structure
+
+ # Needed for autocompletion in an IDE
+ def __dir__(self):
+ result = super().__dir__()
+ # The elements of self.__all__ that are submodules may or may not be in the dir already, depending on whether
+ # they have been accessed or not. So we only add the elements of self.__all__ that are not already in the dir.
+ for attr in self.__all__:
+ if attr not in result:
+ result.append(attr)
+ return result
+
+ def __getattr__(self, name: str) -> Any:
+ if name in self._objects:
+ return self._objects[name]
+ if name in self._modules:
+ value = self._get_module(name)
+ elif name in self._class_to_module.keys():
+ module = self._get_module(self._class_to_module[name])
+ value = getattr(module, name)
+ else:
+ raise AttributeError(f"module {self.__name__} has no attribute {name}")
+
+ setattr(self, name, value)
+ return value
+
+ def _get_module(self, module_name: str):
+ try:
+ return importlib.import_module("." + module_name, self.__name__)
+ except Exception as e:
+ raise RuntimeError(
+ f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
+ f" traceback):\n{e}"
+ ) from e
+
+ def __reduce__(self):
+ return (self.__class__, (self._name, self.__file__, self._import_structure))
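As a hedged illustration of how `_LazyModule`, `get_objects_from_module`, and the `setattr` loops in the package `__init__.py` files fit together when an optional backend is missing: the public name still resolves to a dummy class, and only using it raises. The repo id below is a placeholder.

```python
# With transformers absent, the t2i_adapter __init__ copied the dummy class
# onto its lazy module, so the import itself still succeeds:
from diffusers.pipelines.t2i_adapter import StableDiffusionAdapterPipeline

# The requires_backends check inside the dummy object raises only on use:
StableDiffusionAdapterPipeline.from_pretrained("some-org/some-repo")  # ImportError: ... requires transformers
```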
diff --git a/src/diffusers/utils/loading_utils.py b/src/diffusers/utils/loading_utils.py
new file mode 100644
index 000000000000..279aa6fe737b
--- /dev/null
+++ b/src/diffusers/utils/loading_utils.py
@@ -0,0 +1,37 @@
+import os
+from typing import Union
+
+import PIL.Image
+import PIL.ImageOps
+import requests
+
+
+def load_image(image: Union[str, PIL.Image.Image]) -> PIL.Image.Image:
+ """
+ Loads `image` to a PIL Image.
+
+ Args:
+ image (`str` or `PIL.Image.Image`):
+ The image to convert to the PIL Image format.
+ Returns:
+ `PIL.Image.Image`:
+ A PIL Image.
+ """
+ if isinstance(image, str):
+ if image.startswith("http://") or image.startswith("https://"):
+ image = PIL.Image.open(requests.get(image, stream=True).raw)
+ elif os.path.isfile(image):
+ image = PIL.Image.open(image)
+ else:
+ raise ValueError(
+ f"Incorrect path or url, URLs must start with `http://` or `https://`, and {image} is not a valid path"
+ )
+ elif isinstance(image, PIL.Image.Image):
+ image = image
+ else:
+ raise ValueError(
+ "Incorrect format used for image. Should be an url linking to an image, a local path, or a PIL image."
+ )
+ image = PIL.ImageOps.exif_transpose(image)
+ image = image.convert("RGB")
+ return image
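A short usage sketch for `load_image` (the URL is a placeholder); it accepts a URL, a local path, or an existing PIL image and always returns an RGB PIL image with EXIF orientation applied:

```python
from diffusers.utils import load_image  # re-exported from diffusers.utils.loading_utils

image = load_image("https://example.com/sample.png")
print(image.mode)  # "RGB"
```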
diff --git a/tests/models/test_layers_utils.py b/tests/models/test_layers_utils.py
index 40627cc93caa..9d45d810f653 100644
--- a/tests/models/test_layers_utils.py
+++ b/tests/models/test_layers_utils.py
@@ -25,7 +25,7 @@
from diffusers.models.lora import LoRACompatibleLinear
from diffusers.models.resnet import Downsample2D, ResnetBlock2D, Upsample2D
from diffusers.models.transformer_2d import Transformer2DModel
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
class EmbeddingsTests(unittest.TestCase):
diff --git a/tests/models/test_lora_layers.py b/tests/models/test_lora_layers.py
index c49ea7f2d960..1d846b6cdb3f 100644
--- a/tests/models/test_lora_layers.py
+++ b/tests/models/test_lora_layers.py
@@ -43,8 +43,7 @@
LoRAAttnProcessor2_0,
XFormersAttnProcessor,
)
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import require_torch_gpu, slow
+from diffusers.utils.testing_utils import floats_tensor, require_torch_gpu, slow, torch_device
def create_unet_lora_layers(unet: nn.Module):
diff --git a/tests/models/test_modeling_common.py b/tests/models/test_modeling_common.py
index d071bc3ccb60..921f67410032 100644
--- a/tests/models/test_modeling_common.py
+++ b/tests/models/test_modeling_common.py
@@ -30,12 +30,13 @@
from diffusers.models import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0, XFormersAttnProcessor
from diffusers.training_utils import EMAModel
-from diffusers.utils import logging, torch_device
+from diffusers.utils import logging
from diffusers.utils.testing_utils import (
CaptureLogger,
require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
+ torch_device,
)
from ..others.test_utils import TOKEN, USER, is_staging_test
diff --git a/tests/models/test_models_prior.py b/tests/models/test_models_prior.py
index 25b9768ee34f..4c47a44ef52a 100644
--- a/tests/models/test_models_prior.py
+++ b/tests/models/test_models_prior.py
@@ -21,8 +21,7 @@
from parameterized import parameterized
from diffusers import PriorTransformer
-from diffusers.utils import floats_tensor, slow, torch_all_close, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, slow, torch_all_close, torch_device
from .test_modeling_common import ModelTesterMixin
diff --git a/tests/models/test_models_unet_1d.py b/tests/models/test_models_unet_1d.py
index 1b58f9e616be..5803e5bfda2a 100644
--- a/tests/models/test_models_unet_1d.py
+++ b/tests/models/test_models_unet_1d.py
@@ -18,7 +18,7 @@
import torch
from diffusers import UNet1DModel
-from diffusers.utils import floats_tensor, slow, torch_device
+from diffusers.utils.testing_utils import floats_tensor, slow, torch_device
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_models_unet_2d.py b/tests/models/test_models_unet_2d.py
index 5019c7eb2740..c5289a54b4bc 100644
--- a/tests/models/test_models_unet_2d.py
+++ b/tests/models/test_models_unet_2d.py
@@ -20,8 +20,14 @@
import torch
from diffusers import UNet2DModel
-from diffusers.utils import floats_tensor, logging, slow, torch_all_close, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils import logging
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ slow,
+ torch_all_close,
+ torch_device,
+)
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_models_unet_2d_condition.py b/tests/models/test_models_unet_2d_condition.py
index 85d6f48a1b95..f0f91a3a86a1 100644
--- a/tests/models/test_models_unet_2d_condition.py
+++ b/tests/models/test_models_unet_2d_condition.py
@@ -25,17 +25,17 @@
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import CustomDiffusionAttnProcessor, LoRAAttnProcessor
-from diffusers.utils import (
+from diffusers.utils import logging
+from diffusers.utils.import_utils import is_xformers_available
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
floats_tensor,
load_hf_numpy,
- logging,
require_torch_gpu,
slow,
torch_all_close,
torch_device,
)
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_models_unet_3d_condition.py b/tests/models/test_models_unet_3d_condition.py
index ed42c582e889..f0d6a8d72571 100644
--- a/tests/models/test_models_unet_3d_condition.py
+++ b/tests/models/test_models_unet_3d_condition.py
@@ -22,14 +22,9 @@
from diffusers.models import ModelMixin, UNet3DConditionModel
from diffusers.models.attention_processor import AttnProcessor, LoRAAttnProcessor
-from diffusers.utils import (
- floats_tensor,
- logging,
- skip_mps,
- torch_device,
-)
+from diffusers.utils import logging
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, skip_mps, torch_device
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_models_vae.py b/tests/models/test_models_vae.py
index fe38b4fc216d..fe2bcdb0af35 100644
--- a/tests/models/test_models_vae.py
+++ b/tests/models/test_models_vae.py
@@ -20,9 +20,16 @@
from parameterized import parameterized
from diffusers import AsymmetricAutoencoderKL, AutoencoderKL, AutoencoderTiny
-from diffusers.utils import floats_tensor, load_hf_numpy, require_torch_gpu, slow, torch_all_close, torch_device
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_hf_numpy,
+ require_torch_gpu,
+ slow,
+ torch_all_close,
+ torch_device,
+)
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_models_vq.py b/tests/models/test_models_vq.py
index 5706c13a0c45..c7b9363b5d5f 100644
--- a/tests/models/test_models_vq.py
+++ b/tests/models/test_models_vq.py
@@ -18,8 +18,7 @@
import torch
from diffusers import VQModel
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, torch_device
from .test_modeling_common import ModelTesterMixin, UNetTesterMixin
diff --git a/tests/models/test_unet_2d_blocks.py b/tests/models/test_unet_2d_blocks.py
index 4d658f282932..d714b9384860 100644
--- a/tests/models/test_unet_2d_blocks.py
+++ b/tests/models/test_unet_2d_blocks.py
@@ -15,7 +15,7 @@
import unittest
from diffusers.models.unet_2d_blocks import * # noqa F403
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_unet_blocks_common import UNetBlockTesterMixin
diff --git a/tests/models/test_unet_blocks_common.py b/tests/models/test_unet_blocks_common.py
index 17b7f65d6da3..4c399fdb74fa 100644
--- a/tests/models/test_unet_blocks_common.py
+++ b/tests/models/test_unet_blocks_common.py
@@ -17,8 +17,8 @@
import torch
-from diffusers.utils import floats_tensor, randn_tensor, torch_all_close, torch_device
-from diffusers.utils.testing_utils import require_torch
+from diffusers.utils.testing_utils import floats_tensor, require_torch, torch_all_close, torch_device
+from diffusers.utils.torch_utils import randn_tensor
@require_torch
diff --git a/tests/pipelines/altdiffusion/test_alt_diffusion.py b/tests/pipelines/altdiffusion/test_alt_diffusion.py
index 81ec00940c12..da5eb34fe92f 100644
--- a/tests/pipelines/altdiffusion/test_alt_diffusion.py
+++ b/tests/pipelines/altdiffusion/test_alt_diffusion.py
@@ -25,8 +25,7 @@
RobertaSeriesConfig,
RobertaSeriesModelWithTransformation,
)
-from diffusers.utils import nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py b/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py
index 9bef75f4fff5..57001f7bea52 100644
--- a/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py
+++ b/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py
@@ -32,8 +32,15 @@
RobertaSeriesConfig,
RobertaSeriesModelWithTransformation,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils import load_image
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_numpy,
+ nightly,
+ require_torch_gpu,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/audio_diffusion/test_audio_diffusion.py b/tests/pipelines/audio_diffusion/test_audio_diffusion.py
index d2b110adb00d..271e458bf565 100644
--- a/tests/pipelines/audio_diffusion/test_audio_diffusion.py
+++ b/tests/pipelines/audio_diffusion/test_audio_diffusion.py
@@ -29,8 +29,7 @@
UNet2DConditionModel,
UNet2DModel,
)
-from diffusers.utils import nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, slow, torch_device
enable_full_determinism()
diff --git a/tests/pipelines/audioldm/test_audioldm.py b/tests/pipelines/audioldm/test_audioldm.py
index 0165d3f5edda..516cea76b742 100644
--- a/tests/pipelines/audioldm/test_audioldm.py
+++ b/tests/pipelines/audioldm/test_audioldm.py
@@ -36,8 +36,8 @@
PNDMScheduler,
UNet2DConditionModel,
)
-from diffusers.utils import is_xformers_available, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils import is_xformers_available
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, slow, torch_device
from ..pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/audioldm2/test_audioldm2.py b/tests/pipelines/audioldm2/test_audioldm2.py
index 942aec70d7cb..b37fe4dcec48 100644
--- a/tests/pipelines/audioldm2/test_audioldm2.py
+++ b/tests/pipelines/audioldm2/test_audioldm2.py
@@ -44,8 +44,8 @@
LMSDiscreteScheduler,
PNDMScheduler,
)
-from diffusers.utils import is_accelerate_available, is_accelerate_version, is_xformers_available, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils import is_accelerate_available, is_accelerate_version, is_xformers_available
+from diffusers.utils.testing_utils import enable_full_determinism, slow, torch_device
from ..pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/consistency_models/test_consistency_models.py b/tests/pipelines/consistency_models/test_consistency_models.py
index dfb19755d879..6732d5228d50 100644
--- a/tests/pipelines/consistency_models/test_consistency_models.py
+++ b/tests/pipelines/consistency_models/test_consistency_models.py
@@ -10,8 +10,14 @@
ConsistencyModelPipeline,
UNet2DModel,
)
-from diffusers.utils import nightly, randn_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_2, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ nightly,
+ require_torch_2,
+ require_torch_gpu,
+ torch_device,
+)
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/controlnet/test_controlnet.py b/tests/pipelines/controlnet/test_controlnet.py
index 62f011cce59e..3ede0f2c4271 100644
--- a/tests/pipelines/controlnet/test_controlnet.py
+++ b/tests/pipelines/controlnet/test_controlnet.py
@@ -31,14 +31,18 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.utils import load_image, load_numpy, randn_tensor, slow, torch_device
from diffusers.utils.import_utils import is_xformers_available
from diffusers.utils.testing_utils import (
enable_full_determinism,
+ load_image,
+ load_numpy,
require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
+ slow,
+ torch_device,
)
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/controlnet/test_controlnet_img2img.py b/tests/pipelines/controlnet/test_controlnet_img2img.py
index 4ba1b9a09ebe..209f6d23387e 100644
--- a/tests/pipelines/controlnet/test_controlnet_img2img.py
+++ b/tests/pipelines/controlnet/test_controlnet_img2img.py
@@ -33,9 +33,17 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, randn_tensor, slow, torch_device
+from diffusers.utils import load_image
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/controlnet/test_controlnet_inpaint.py b/tests/pipelines/controlnet/test_controlnet_inpaint.py
index 07519595c49e..abaa6d37b922 100644
--- a/tests/pipelines/controlnet/test_controlnet_inpaint.py
+++ b/tests/pipelines/controlnet/test_controlnet_inpaint.py
@@ -33,9 +33,17 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, randn_tensor, slow, torch_device
+from diffusers.utils import load_image
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
diff --git a/tests/pipelines/controlnet/test_controlnet_inpaint_sdxl.py b/tests/pipelines/controlnet/test_controlnet_inpaint_sdxl.py
index 8dbfb95d0960..81c789e71260 100644
--- a/tests/pipelines/controlnet/test_controlnet_inpaint_sdxl.py
+++ b/tests/pipelines/controlnet/test_controlnet_inpaint_sdxl.py
@@ -28,9 +28,8 @@
StableDiffusionXLControlNetInpaintPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, torch_device
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, require_torch_gpu, torch_device
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/controlnet/test_controlnet_sdxl.py b/tests/pipelines/controlnet/test_controlnet_sdxl.py
index 8fb76499dc14..264b879e44be 100644
--- a/tests/pipelines/controlnet/test_controlnet_sdxl.py
+++ b/tests/pipelines/controlnet/test_controlnet_sdxl.py
@@ -28,9 +28,9 @@
UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
-from diffusers.utils import load_image, randn_tensor, torch_device
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow
+from diffusers.utils.testing_utils import enable_full_determinism, load_image, require_torch_gpu, slow, torch_device
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/controlnet/test_controlnet_sdxl_img2img.py b/tests/pipelines/controlnet/test_controlnet_sdxl_img2img.py
index 1028e4cb2b61..ee8c479b1894 100644
--- a/tests/pipelines/controlnet/test_controlnet_sdxl_img2img.py
+++ b/tests/pipelines/controlnet/test_controlnet_sdxl_img2img.py
@@ -27,9 +27,8 @@
StableDiffusionXLControlNetImg2ImgPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, torch_device
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, require_torch_gpu, torch_device
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/controlnet/test_flax_controlnet.py b/tests/pipelines/controlnet/test_flax_controlnet.py
index 4ad75b407acc..e4d131195d6a 100644
--- a/tests/pipelines/controlnet/test_flax_controlnet.py
+++ b/tests/pipelines/controlnet/test_flax_controlnet.py
@@ -17,8 +17,8 @@
import unittest
from diffusers import FlaxControlNetModel, FlaxStableDiffusionControlNetPipeline
-from diffusers.utils import is_flax_available, load_image, slow
-from diffusers.utils.testing_utils import require_flax
+from diffusers.utils import is_flax_available, load_image
+from diffusers.utils.testing_utils import require_flax, slow
if is_flax_available():
diff --git a/tests/pipelines/dance_diffusion/test_dance_diffusion.py b/tests/pipelines/dance_diffusion/test_dance_diffusion.py
index b517b02bbabf..fa10f29ee1f6 100644
--- a/tests/pipelines/dance_diffusion/test_dance_diffusion.py
+++ b/tests/pipelines/dance_diffusion/test_dance_diffusion.py
@@ -20,8 +20,7 @@
import torch
from diffusers import DanceDiffusionPipeline, IPNDMScheduler, UNet1DModel
-from diffusers.utils import nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, skip_mps, torch_device
from ..pipeline_params import UNCONDITIONAL_AUDIO_GENERATION_BATCH_PARAMS, UNCONDITIONAL_AUDIO_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/deepfloyd_if/test_if_img2img.py b/tests/pipelines/deepfloyd_if/test_if_img2img.py
index ec4598906a6f..bfb70c5c9b98 100644
--- a/tests/pipelines/deepfloyd_if/test_if_img2img.py
+++ b/tests/pipelines/deepfloyd_if/test_if_img2img.py
@@ -19,9 +19,8 @@
import torch
from diffusers import IFImg2ImgPipeline
-from diffusers.utils import floats_tensor
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
+from diffusers.utils.testing_utils import floats_tensor, skip_mps, torch_device
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS,
diff --git a/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py b/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py
index 500557108aed..f35f3e945609 100644
--- a/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py
+++ b/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py
@@ -19,9 +19,8 @@
import torch
from diffusers import IFImg2ImgSuperResolutionPipeline
-from diffusers.utils import floats_tensor
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
+from diffusers.utils.testing_utils import floats_tensor, skip_mps, torch_device
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/deepfloyd_if/test_if_inpainting.py b/tests/pipelines/deepfloyd_if/test_if_inpainting.py
index 1317fcb64e81..68753c0ac1cd 100644
--- a/tests/pipelines/deepfloyd_if/test_if_inpainting.py
+++ b/tests/pipelines/deepfloyd_if/test_if_inpainting.py
@@ -19,9 +19,8 @@
import torch
from diffusers import IFInpaintingPipeline
-from diffusers.utils import floats_tensor
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
+from diffusers.utils.testing_utils import floats_tensor, skip_mps, torch_device
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
diff --git a/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py b/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py
index 961a22675f33..03b92e0d783c 100644
--- a/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py
+++ b/tests/pipelines/deepfloyd_if/test_if_inpainting_superresolution.py
@@ -19,9 +19,8 @@
import torch
from diffusers import IFInpaintingSuperResolutionPipeline
-from diffusers.utils import floats_tensor
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
+from diffusers.utils.testing_utils import floats_tensor, skip_mps, torch_device
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS,
diff --git a/tests/pipelines/deepfloyd_if/test_if_superresolution.py b/tests/pipelines/deepfloyd_if/test_if_superresolution.py
index 52fb38308892..5a74148e6661 100644
--- a/tests/pipelines/deepfloyd_if/test_if_superresolution.py
+++ b/tests/pipelines/deepfloyd_if/test_if_superresolution.py
@@ -19,9 +19,8 @@
import torch
from diffusers import IFSuperResolutionPipeline
-from diffusers.utils import floats_tensor
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
+from diffusers.utils.testing_utils import floats_tensor, skip_mps, torch_device
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/dit/test_dit.py b/tests/pipelines/dit/test_dit.py
index 2f91473b070b..8f4d11ec3838 100644
--- a/tests/pipelines/dit/test_dit.py
+++ b/tests/pipelines/dit/test_dit.py
@@ -20,8 +20,8 @@
import torch
from diffusers import AutoencoderKL, DDIMScheduler, DiTPipeline, DPMSolverMultistepScheduler, Transformer2DModel
-from diffusers.utils import is_xformers_available, load_numpy, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils import is_xformers_available
+from diffusers.utils.testing_utils import enable_full_determinism, load_numpy, nightly, require_torch_gpu, torch_device
from ..pipeline_params import (
CLASS_CONDITIONED_IMAGE_GENERATION_BATCH_PARAMS,
diff --git a/tests/pipelines/kandinsky/test_kandinsky.py b/tests/pipelines/kandinsky/test_kandinsky.py
index 01b8a0f3eec1..dd0cc75d629a 100644
--- a/tests/pipelines/kandinsky/test_kandinsky.py
+++ b/tests/pipelines/kandinsky/test_kandinsky.py
@@ -23,8 +23,14 @@
from diffusers import DDIMScheduler, KandinskyPipeline, KandinskyPriorPipeline, UNet2DConditionModel, VQModel
from diffusers.pipelines.kandinsky.text_encoder import MCLIPConfig, MultilingualCLIP
-from diffusers.utils import floats_tensor, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky/test_kandinsky_combined.py b/tests/pipelines/kandinsky/test_kandinsky_combined.py
index 7629407ab745..d2079d67b60e 100644
--- a/tests/pipelines/kandinsky/test_kandinsky_combined.py
+++ b/tests/pipelines/kandinsky/test_kandinsky_combined.py
@@ -18,8 +18,7 @@
import numpy as np
from diffusers import KandinskyCombinedPipeline, KandinskyImg2ImgCombinedPipeline, KandinskyInpaintCombinedPipeline
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, torch_device
from ..test_pipelines_common import PipelineTesterMixin
from .test_kandinsky import Dummies
diff --git a/tests/pipelines/kandinsky/test_kandinsky_img2img.py b/tests/pipelines/kandinsky/test_kandinsky_img2img.py
index f309dec89370..d91f779d2221 100644
--- a/tests/pipelines/kandinsky/test_kandinsky_img2img.py
+++ b/tests/pipelines/kandinsky/test_kandinsky_img2img.py
@@ -31,8 +31,16 @@
VQModel,
)
from diffusers.pipelines.kandinsky.text_encoder import MCLIPConfig, MultilingualCLIP
-from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ nightly,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky/test_kandinsky_inpaint.py b/tests/pipelines/kandinsky/test_kandinsky_inpaint.py
index 7f1841d60807..73c4eadadd96 100644
--- a/tests/pipelines/kandinsky/test_kandinsky_inpaint.py
+++ b/tests/pipelines/kandinsky/test_kandinsky_inpaint.py
@@ -24,8 +24,15 @@
from diffusers import DDIMScheduler, KandinskyInpaintPipeline, KandinskyPriorPipeline, UNet2DConditionModel, VQModel
from diffusers.pipelines.kandinsky.text_encoder import MCLIPConfig, MultilingualCLIP
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky/test_kandinsky_prior.py b/tests/pipelines/kandinsky/test_kandinsky_prior.py
index 7b1acc9fc03e..b9f78ee0e8af 100644
--- a/tests/pipelines/kandinsky/test_kandinsky_prior.py
+++ b/tests/pipelines/kandinsky/test_kandinsky_prior.py
@@ -28,8 +28,7 @@
)
from diffusers import KandinskyPriorPipeline, PriorTransformer, UnCLIPScheduler
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky.py b/tests/pipelines/kandinsky_v22/test_kandinsky.py
index 6430a476ab98..4f18990c2c0a 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky.py
@@ -21,8 +21,14 @@
import torch
from diffusers import DDIMScheduler, KandinskyV22Pipeline, KandinskyV22PriorPipeline, UNet2DConditionModel, VQModel
-from diffusers.utils import floats_tensor, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_combined.py b/tests/pipelines/kandinsky_v22/test_kandinsky_combined.py
index 7591b2347a92..ba8888ee1fa6 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_combined.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_combined.py
@@ -22,8 +22,7 @@
KandinskyV22Img2ImgCombinedPipeline,
KandinskyV22InpaintCombinedPipeline,
)
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, torch_device
from ..test_pipelines_common import PipelineTesterMixin
from .test_kandinsky import Dummies
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet.py b/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet.py
index a50bdb50a47b..575d0aaaa767 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet.py
@@ -27,8 +27,15 @@
UNet2DConditionModel,
VQModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py b/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py
index 9d0ac96888ec..17394316ce7a 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_controlnet_img2img.py
@@ -28,8 +28,15 @@
UNet2DConditionModel,
VQModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py b/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py
index 17f27d0d7804..1454b061bc90 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_img2img.py
@@ -28,8 +28,15 @@
UNet2DConditionModel,
VQModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py b/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py
index 436c240e1ac8..d7fcf670278d 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_inpaint.py
@@ -28,8 +28,15 @@
UNet2DConditionModel,
VQModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_prior.py b/tests/pipelines/kandinsky_v22/test_kandinsky_prior.py
index 3191f6a11309..317e822a465a 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_prior.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_prior.py
@@ -28,8 +28,7 @@
)
from diffusers import KandinskyV22PriorPipeline, PriorTransformer, UnCLIPScheduler
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/kandinsky_v22/test_kandinsky_prior_emb2emb.py b/tests/pipelines/kandinsky_v22/test_kandinsky_prior_emb2emb.py
index 75d101e9c10d..f71cbfcd0b5c 100644
--- a/tests/pipelines/kandinsky_v22/test_kandinsky_prior_emb2emb.py
+++ b/tests/pipelines/kandinsky_v22/test_kandinsky_prior_emb2emb.py
@@ -30,8 +30,7 @@
)
from diffusers import KandinskyV22PriorEmb2EmbPipeline, PriorTransformer, UnCLIPScheduler
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, skip_mps, torch_device
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py b/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py
index d21ead543af8..c26a8b407b67 100644
--- a/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py
+++ b/tests/pipelines/latent_diffusion/test_latent_diffusion_superresolution.py
@@ -20,8 +20,15 @@
import torch
from diffusers import DDIMScheduler, LDMSuperResolutionPipeline, UNet2DModel, VQModel
-from diffusers.utils import PIL_INTERPOLATION, floats_tensor, load_image, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch
+from diffusers.utils import PIL_INTERPOLATION
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ require_torch,
+ slow,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/musicldm/test_musicldm.py b/tests/pipelines/musicldm/test_musicldm.py
index 4874bf16942d..ea4c52aee1eb 100644
--- a/tests/pipelines/musicldm/test_musicldm.py
+++ b/tests/pipelines/musicldm/test_musicldm.py
@@ -38,8 +38,8 @@
PNDMScheduler,
UNet2DConditionModel,
)
-from diffusers.utils import is_xformers_available, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu
+from diffusers.utils import is_xformers_available
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, torch_device
from ..pipeline_params import TEXT_TO_AUDIO_BATCH_PARAMS, TEXT_TO_AUDIO_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/paint_by_example/test_paint_by_example.py b/tests/pipelines/paint_by_example/test_paint_by_example.py
index 8b5b50b9f819..3148f9483124 100644
--- a/tests/pipelines/paint_by_example/test_paint_by_example.py
+++ b/tests/pipelines/paint_by_example/test_paint_by_example.py
@@ -24,8 +24,14 @@
from diffusers import AutoencoderKL, PaintByExamplePipeline, PNDMScheduler, UNet2DConditionModel
from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder
-from diffusers.utils import floats_tensor, load_image, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ nightly,
+ require_torch_gpu,
+ torch_device,
+)
from ..pipeline_params import IMAGE_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, IMAGE_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py b/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py
index 9e810616dc56..a09d0df79094 100644
--- a/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py
+++ b/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py
@@ -24,8 +24,13 @@
from diffusers import AutoencoderKL, DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler, UNet2DConditionModel
from diffusers.pipelines.semantic_stable_diffusion import SemanticStableDiffusionPipeline as StableDiffusionPipeline
-from diffusers.utils import floats_tensor, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ nightly,
+ require_torch_gpu,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/shap_e/test_shap_e.py b/tests/pipelines/shap_e/test_shap_e.py
index 90ff37de6e9a..f3c782c14bb2 100644
--- a/tests/pipelines/shap_e/test_shap_e.py
+++ b/tests/pipelines/shap_e/test_shap_e.py
@@ -21,8 +21,7 @@
from diffusers import HeunDiscreteScheduler, PriorTransformer, ShapEPipeline
from diffusers.pipelines.shap_e import ShapERenderer
-from diffusers.utils import load_numpy, slow
-from diffusers.utils.testing_utils import require_torch_gpu, torch_device
+from diffusers.utils.testing_utils import load_numpy, require_torch_gpu, slow, torch_device
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/shap_e/test_shap_e_img2img.py b/tests/pipelines/shap_e/test_shap_e_img2img.py
index 0dffac98aa25..44597e2fe49a 100644
--- a/tests/pipelines/shap_e/test_shap_e_img2img.py
+++ b/tests/pipelines/shap_e/test_shap_e_img2img.py
@@ -22,8 +22,7 @@
from diffusers import HeunDiscreteScheduler, PriorTransformer, ShapEImg2ImgPipeline
from diffusers.pipelines.shap_e import ShapERenderer
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow
-from diffusers.utils.testing_utils import require_torch_gpu, torch_device
+from diffusers.utils.testing_utils import floats_tensor, load_image, load_numpy, require_torch_gpu, slow, torch_device
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py b/tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
index e70b377e2fe0..1d00c7e963bb 100644
--- a/tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
+++ b/tests/pipelines/spectrogram_diffusion/test_spectrogram_diffusion.py
@@ -21,8 +21,15 @@
from diffusers import DDPMScheduler, MidiProcessor, SpectrogramDiffusionPipeline
from diffusers.pipelines.spectrogram_diffusion import SpectrogramContEncoder, SpectrogramNotesEncoder, T5FilmDecoder
-from diffusers.utils import nightly, require_torch_gpu, skip_mps, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_note_seq, require_onnxruntime
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ nightly,
+ require_note_seq,
+ require_onnxruntime,
+ require_torch_gpu,
+ skip_mps,
+ torch_device,
+)
from ..pipeline_params import TOKENS_TO_AUDIO_GENERATION_BATCH_PARAMS, TOKENS_TO_AUDIO_GENERATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion/test_cycle_diffusion.py b/tests/pipelines/stable_diffusion/test_cycle_diffusion.py
index 9a54c21c0a21..27a5da556021 100644
--- a/tests/pipelines/stable_diffusion/test_cycle_diffusion.py
+++ b/tests/pipelines/stable_diffusion/test_cycle_diffusion.py
@@ -22,8 +22,16 @@
from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, CycleDiffusionPipeline, DDIMScheduler, UNet2DConditionModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ skip_mps,
+ slow,
+ torch_device,
+)
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py b/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py
index 9147dc461fc5..d7d549b7b5c2 100644
--- a/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py
+++ b/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py
@@ -26,8 +26,8 @@
OnnxStableDiffusionImg2ImgPipeline,
PNDMScheduler,
)
-from diffusers.utils import floats_tensor
from diffusers.utils.testing_utils import (
+ floats_tensor,
is_onnx_available,
load_image,
nightly,
diff --git a/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py b/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py
index c65030406465..56c10adbd6ae 100644
--- a/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py
+++ b/tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py
@@ -26,8 +26,8 @@
OnnxStableDiffusionUpscalePipeline,
PNDMScheduler,
)
-from diffusers.utils import floats_tensor
from diffusers.utils.testing_utils import (
+ floats_tensor,
is_onnx_available,
load_image,
nightly,
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion.py b/tests/pipelines/stable_diffusion/test_stable_diffusion.py
index 31de557a0ac3..e67bfd661cc1 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion.py
@@ -38,14 +38,17 @@
logging,
)
from diffusers.models.attention_processor import AttnProcessor, LoRAXFormersAttnProcessor
-from diffusers.utils import load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import (
CaptureLogger,
enable_full_determinism,
+ load_numpy,
+ nightly,
numpy_cosine_similarity_distance,
require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
+ slow,
+ torch_device,
)
from ...models.test_lora_layers import create_unet_lora_layers
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_adapter.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_adapter.py
index 5778e862a3b0..c0ef4ceae92c 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_adapter.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_adapter.py
@@ -30,9 +30,17 @@
T2IAdapter,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, logging, slow, torch_device
+from diffusers.utils import logging
from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_gligen_text_image.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_gligen_text_image.py
index e2b4f59dd103..4e14adc81f42 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_gligen_text_image.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_gligen_text_image.py
@@ -28,11 +28,11 @@
from diffusers import (
AutoencoderKL,
- CLIPImageProjection,
DDIMScheduler,
StableDiffusionGLIGENTextImagePipeline,
UNet2DConditionModel,
)
+from diffusers.pipelines.stable_diffusion import CLIPImageProjection
from diffusers.utils import load_image
from diffusers.utils.testing_utils import enable_full_determinism
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py
index 580c78675a92..b6d6c7b80c98 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py
@@ -29,8 +29,16 @@
StableDiffusionImageVariationPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ nightly,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
index 043825c2f75d..cf22fccd8232 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
@@ -32,13 +32,18 @@
StableDiffusionImg2ImgPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import (
enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ nightly,
require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
skip_mps,
+ slow,
+ torch_device,
)
from ..pipeline_params import (
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint.py
index 4d75992b74a8..21e8c05ac28f 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint.py
@@ -36,12 +36,17 @@
)
from diffusers.models.attention_processor import AttnProcessor
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import prepare_mask_and_masked_image
-from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import (
enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ nightly,
require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
+ slow,
+ torch_device,
)
from ...models.test_models_unet_2d_condition import create_lora_layers
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
index fa00a0d201af..45563cdb798b 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_inpaint_legacy.py
@@ -33,8 +33,17 @@
UNet2DModel,
VQModel,
)
-from diffusers.utils import floats_tensor, load_image, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, load_numpy, preprocess_image, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ nightly,
+ preprocess_image,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py
index 513e11c105d5..07fd8e1b5192 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_instruction_pix2pix.py
@@ -32,8 +32,14 @@
UNet2DConditionModel,
)
from diffusers.image_processor import VaeImageProcessor
-from diffusers.utils import floats_tensor, load_image, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py
index 25da13d9f922..672c0ebfa0d8 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_k_diffusion.py
@@ -20,8 +20,7 @@
import torch
from diffusers import StableDiffusionKDiffusionPipeline
-from diffusers.utils import slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow, torch_device
enable_full_determinism()
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py
index e2164e8117ad..b812f1d3c257 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_ldm3d.py
@@ -28,8 +28,7 @@
StableDiffusionLDM3DPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, slow, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py
index 81d1baed5df6..b7ddd2fd59f8 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_model_editing.py
@@ -28,8 +28,7 @@
StableDiffusionModelEditingPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps, slow, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py
index a10e74742c4d..657608df8b98 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_panorama.py
@@ -29,8 +29,7 @@
StableDiffusionPanoramaPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, nightly, require_torch_gpu, skip_mps, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_paradigms.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_paradigms.py
index 781cbcbd69a1..3ce476d09be9 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_paradigms.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_paradigms.py
@@ -27,10 +27,11 @@
StableDiffusionParadigmsPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import slow, torch_device
from diffusers.utils.testing_utils import (
enable_full_determinism,
require_torch_gpu,
+ slow,
+ torch_device,
)
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
index c513fb1c0b33..54b82f2f2487 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_pix2pix_zero.py
@@ -33,8 +33,17 @@
UNet2DConditionModel,
)
from diffusers.image_processor import VaeImageProcessor
-from diffusers.utils import floats_tensor, load_numpy, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, load_image, load_pt, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ load_pt,
+ nightly,
+ require_torch_gpu,
+ skip_mps,
+ torch_device,
+)
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS,
diff --git a/tests/pipelines/stable_diffusion/test_stable_diffusion_sag.py b/tests/pipelines/stable_diffusion/test_stable_diffusion_sag.py
index 79d76666c392..b87d11e85876 100644
--- a/tests/pipelines/stable_diffusion/test_stable_diffusion_sag.py
+++ b/tests/pipelines/stable_diffusion/test_stable_diffusion_sag.py
@@ -26,8 +26,7 @@
StableDiffusionSAGPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
index 3991366966c3..3842dda2e551 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion.py
@@ -32,12 +32,15 @@
UNet2DConditionModel,
logging,
)
-from diffusers.utils import load_numpy, nightly, slow, torch_device
from diffusers.utils.testing_utils import (
CaptureLogger,
enable_full_determinism,
+ load_numpy,
+ nightly,
numpy_cosine_similarity_distance,
require_torch_gpu,
+ slow,
+ torch_device,
)
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
index 3e280058b1f7..fcd6ff8d77f3 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
@@ -26,8 +26,13 @@
StableDiffusionAttendAndExcitePipeline,
UNet2DConditionModel,
)
-from diffusers.utils import load_numpy, skip_mps, slow
-from diffusers.utils.testing_utils import numpy_cosine_similarity_distance, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ load_numpy,
+ numpy_cosine_similarity_distance,
+ require_torch_gpu,
+ skip_mps,
+ slow,
+)
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py
index 236bec5bac38..149c90698f1c 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_depth.py
@@ -39,17 +39,18 @@
StableDiffusionDepth2ImgPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import (
+from diffusers.utils import is_accelerate_available, is_accelerate_version
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
floats_tensor,
- is_accelerate_available,
- is_accelerate_version,
load_image,
load_numpy,
nightly,
+ require_torch_gpu,
+ skip_mps,
slow,
torch_device,
)
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_diffedit.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_diffedit.py
index 8be8f276df25..c4cfaee9cf31 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_diffedit.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_diffedit.py
@@ -32,8 +32,15 @@
StableDiffusionDiffEditPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import load_image, nightly, slow
-from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, require_torch_gpu, torch_device
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ nightly,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py
index fa93da9052f7..358d137b5781 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax.py
@@ -17,8 +17,8 @@
import unittest
from diffusers import FlaxDPMSolverMultistepScheduler, FlaxStableDiffusionPipeline
-from diffusers.utils import is_flax_available, nightly, slow
-from diffusers.utils.testing_utils import require_flax
+from diffusers.utils import is_flax_available
+from diffusers.utils.testing_utils import nightly, require_flax, slow
if is_flax_available():
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py
index 432619a79ddd..3d9e6c0dc5e1 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py
@@ -17,8 +17,8 @@
import unittest
from diffusers import FlaxStableDiffusionInpaintPipeline
-from diffusers.utils import is_flax_available, load_image, slow
-from diffusers.utils.testing_utils import require_flax
+from diffusers.utils import is_flax_available, load_image
+from diffusers.utils.testing_utils import require_flax, slow
if is_flax_available():
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py
index 68a4b5132375..1e726b95960f 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_inpaint.py
@@ -23,8 +23,15 @@
from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, PNDMScheduler, StableDiffusionInpaintPipeline, UNet2DConditionModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, slow
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py
index ce55bddc4fe0..e20438a2af6b 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_latent_upscale.py
@@ -30,8 +30,15 @@
UNet2DConditionModel,
)
from diffusers.schedulers import KarrasDiffusionSchedulers
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py
index ab7eb2e0fd99..2c0f37519ad8 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py
@@ -24,8 +24,15 @@
from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, DDIMScheduler, DDPMScheduler, StableDiffusionUpscalePipeline, UNet2DConditionModel
-from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py
index a8c857d75532..6062f5edb80b 100644
--- a/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py
+++ b/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py
@@ -31,8 +31,14 @@
UNet2DConditionModel,
)
from diffusers.models.attention_processor import AttnProcessor
-from diffusers.utils import load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, numpy_cosine_similarity_distance, require_torch_gpu
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ load_numpy,
+ numpy_cosine_similarity_distance,
+ require_torch_gpu,
+ slow,
+ torch_device,
+)
enable_full_determinism()
diff --git a/tests/pipelines/stable_diffusion_safe/test_safe_diffusion.py b/tests/pipelines/stable_diffusion_safe/test_safe_diffusion.py
index 09e31aacfbc9..ce57ccadd4f8 100644
--- a/tests/pipelines/stable_diffusion_safe/test_safe_diffusion.py
+++ b/tests/pipelines/stable_diffusion_safe/test_safe_diffusion.py
@@ -24,8 +24,7 @@
from diffusers import AutoencoderKL, DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler, UNet2DConditionModel
from diffusers.pipelines.stable_diffusion_safe import StableDiffusionPipelineSafe as StableDiffusionPipeline
-from diffusers.utils import floats_tensor, nightly, torch_device
-from diffusers.utils.testing_utils import require_torch_gpu
+from diffusers.utils.testing_utils import floats_tensor, nightly, require_torch_gpu, torch_device
class SafeDiffusionPipelineFastTests(unittest.TestCase):
diff --git a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py
index 909f759ff745..dad52238f73a 100644
--- a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py
+++ b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl.py
@@ -32,8 +32,7 @@
UNet2DConditionModel,
UniPCMultistepScheduler,
)
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, torch_device
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_IMAGE_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py
index afe7da3319c7..e71f103005a3 100644
--- a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py
+++ b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_adapter.py
@@ -27,8 +27,7 @@
T2IAdapter,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_img2img.py b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_img2img.py
index 04cbb09f5196..b372971dedba 100644
--- a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_img2img.py
+++ b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_img2img.py
@@ -26,8 +26,7 @@
StableDiffusionXLImg2ImgPipeline,
UNet2DConditionModel,
)
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, require_torch_gpu, torch_device
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py
index dd8f8c18b09c..5d0def014ff5 100644
--- a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py
+++ b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_inpaint.py
@@ -32,8 +32,7 @@
UNet2DConditionModel,
UniPCMultistepScheduler,
)
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, require_torch_gpu, torch_device
from ..pipeline_params import TEXT_GUIDED_IMAGE_INPAINTING_BATCH_PARAMS, TEXT_GUIDED_IMAGE_INPAINTING_PARAMS
from ..test_pipelines_common import PipelineLatentTesterMixin, PipelineTesterMixin
diff --git a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py
index 2608886ded98..ca4017d11b79 100644
--- a/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py
+++ b/tests/pipelines/stable_diffusion_xl/test_stable_diffusion_xl_instruction_pix2pix.py
@@ -29,8 +29,7 @@
from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl_instruct_pix2pix import (
StableDiffusionXLInstructPix2PixPipeline,
)
-from diffusers.utils import floats_tensor, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils.testing_utils import enable_full_determinism, floats_tensor, torch_device
from ..pipeline_params import (
IMAGE_TO_IMAGE_IMAGE_PARAMS,
diff --git a/tests/pipelines/test_pipelines.py b/tests/pipelines/test_pipelines.py
index 02fead022e89..927c5ec28518 100644
--- a/tests/pipelines/test_pipelines.py
+++ b/tests/pipelines/test_pipelines.py
@@ -62,24 +62,24 @@
from diffusers.utils import (
CONFIG_NAME,
WEIGHTS_NAME,
- floats_tensor,
- is_compiled_module,
- nightly,
- require_torch_2,
- slow,
- torch_device,
)
from diffusers.utils.testing_utils import (
CaptureLogger,
enable_full_determinism,
+ floats_tensor,
get_tests_dir,
load_numpy,
+ nightly,
require_compel,
require_flax,
require_onnxruntime,
+ require_torch_2,
require_torch_gpu,
run_test_in_subprocess,
+ slow,
+ torch_device,
)
+from diffusers.utils.torch_utils import is_compiled_module
enable_full_determinism()
diff --git a/tests/pipelines/test_pipelines_auto.py b/tests/pipelines/test_pipelines_auto.py
index e48a99c01e7d..bfdedd25babe 100644
--- a/tests/pipelines/test_pipelines_auto.py
+++ b/tests/pipelines/test_pipelines_auto.py
@@ -34,7 +34,7 @@
AUTO_INPAINT_PIPELINES_MAPPING,
AUTO_TEXT2IMAGE_PIPELINES_MAPPING,
)
-from diffusers.utils import slow
+from diffusers.utils.testing_utils import slow
PRETRAINED_MODEL_REPO_MAPPING = OrderedDict(
diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
index a6f828443cb0..c70ccc635780 100644
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -455,12 +455,13 @@ def _test_inference_batch_single_identical(
# TODO same as above
test_mean_pixel_difference = torch_device != "mps"
+ generator_device = "cpu"
components = self.get_dummy_components()
pipe = self.pipeline_class(**components)
pipe.to(torch_device)
pipe.set_progress_bar_config(disable=None)
- inputs = self.get_dummy_inputs(torch_device)
+ inputs = self.get_dummy_inputs(generator_device)
logger = logging.get_logger(pipe.__module__)
logger.setLevel(level=diffusers.logging.FATAL)
@@ -624,7 +625,8 @@ def test_save_load_optional_components(self, expected_max_difference=1e-4):
for optional_component in pipe._optional_components:
setattr(pipe, optional_component, None)
- inputs = self.get_dummy_inputs(torch_device)
+ generator_device = "cpu"
+ inputs = self.get_dummy_inputs(generator_device)
output = pipe(**inputs)[0]
with tempfile.TemporaryDirectory() as tmpdir:
@@ -642,7 +644,7 @@ def test_save_load_optional_components(self, expected_max_difference=1e-4):
f"`{optional_component}` did not stay set to None after loading.",
)
- inputs = self.get_dummy_inputs(torch_device)
+ inputs = self.get_dummy_inputs(generator_device)
output_loaded = pipe_loaded(**inputs)[0]
max_diff = np.abs(to_np(output) - to_np(output_loaded)).max()
diff --git a/tests/pipelines/text_to_video/test_text_to_video.py b/tests/pipelines/text_to_video/test_text_to_video.py
index 801af7f6b4e6..e03c8fc5dfb6 100644
--- a/tests/pipelines/text_to_video/test_text_to_video.py
+++ b/tests/pipelines/text_to_video/test_text_to_video.py
@@ -25,8 +25,15 @@
TextToVideoSDPipeline,
UNet3DConditionModel,
)
-from diffusers.utils import is_xformers_available, load_numpy, require_torch_gpu, skip_mps, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism
+from diffusers.utils import is_xformers_available
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ load_numpy,
+ require_torch_gpu,
+ skip_mps,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/text_to_video/test_text_to_video_zero.py b/tests/pipelines/text_to_video/test_text_to_video_zero.py
index 8fc7254c52d1..02fb43a0b65b 100644
--- a/tests/pipelines/text_to_video/test_text_to_video_zero.py
+++ b/tests/pipelines/text_to_video/test_text_to_video_zero.py
@@ -18,7 +18,7 @@
import torch
from diffusers import DDIMScheduler, TextToVideoZeroPipeline
-from diffusers.utils import load_pt, require_torch_gpu, slow
+from diffusers.utils.testing_utils import load_pt, require_torch_gpu, slow
from ..test_pipelines_common import assert_mean_pixel_difference
diff --git a/tests/pipelines/text_to_video/test_video_to_video.py b/tests/pipelines/text_to_video/test_video_to_video.py
index 9e61ddcbbd3f..6b1c44ceb057 100644
--- a/tests/pipelines/text_to_video/test_video_to_video.py
+++ b/tests/pipelines/text_to_video/test_video_to_video.py
@@ -26,8 +26,14 @@
UNet3DConditionModel,
VideoToVideoSDPipeline,
)
-from diffusers.utils import floats_tensor, is_xformers_available, skip_mps
-from diffusers.utils.testing_utils import enable_full_determinism, slow, torch_device
+from diffusers.utils import is_xformers_available
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ skip_mps,
+ slow,
+ torch_device,
+)
from ..pipeline_params import (
TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS,
diff --git a/tests/pipelines/unclip/test_unclip.py b/tests/pipelines/unclip/test_unclip.py
index 46890904a3c6..111a8b918457 100644
--- a/tests/pipelines/unclip/test_unclip.py
+++ b/tests/pipelines/unclip/test_unclip.py
@@ -22,8 +22,15 @@
from diffusers import PriorTransformer, UnCLIPPipeline, UnCLIPScheduler, UNet2DConditionModel, UNet2DModel
from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel
-from diffusers.utils import load_numpy, nightly, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ load_numpy,
+ nightly,
+ require_torch_gpu,
+ skip_mps,
+ slow,
+ torch_device,
+)
from ..pipeline_params import TEXT_TO_IMAGE_BATCH_PARAMS, TEXT_TO_IMAGE_PARAMS
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/unclip/test_unclip_image_variation.py b/tests/pipelines/unclip/test_unclip_image_variation.py
index 2604368104a3..6b4e2b0fc0b4 100644
--- a/tests/pipelines/unclip/test_unclip_image_variation.py
+++ b/tests/pipelines/unclip/test_unclip_image_variation.py
@@ -36,8 +36,16 @@
UNet2DModel,
)
from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel
-from diffusers.utils import floats_tensor, load_numpy, slow, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, load_image, require_torch_gpu, skip_mps
+from diffusers.utils.testing_utils import (
+ enable_full_determinism,
+ floats_tensor,
+ load_image,
+ load_numpy,
+ require_torch_gpu,
+ skip_mps,
+ slow,
+ torch_device,
+)
from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
diff --git a/tests/pipelines/unidiffuser/test_unidiffuser.py b/tests/pipelines/unidiffuser/test_unidiffuser.py
index e9506f660e38..865a7cfa6933 100644
--- a/tests/pipelines/unidiffuser/test_unidiffuser.py
+++ b/tests/pipelines/unidiffuser/test_unidiffuser.py
@@ -20,8 +20,8 @@
UniDiffuserPipeline,
UniDiffuserTextDecoder,
)
-from diffusers.utils import floats_tensor, load_image, nightly, randn_tensor, slow, torch_device
-from diffusers.utils.testing_utils import require_torch_gpu
+from diffusers.utils.testing_utils import floats_tensor, load_image, nightly, require_torch_gpu, slow, torch_device
+from diffusers.utils.torch_utils import randn_tensor
from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/vq_diffusion/test_vq_diffusion.py b/tests/pipelines/vq_diffusion/test_vq_diffusion.py
index 462d818f92d6..88e9f19df709 100644
--- a/tests/pipelines/vq_diffusion/test_vq_diffusion.py
+++ b/tests/pipelines/vq_diffusion/test_vq_diffusion.py
@@ -22,8 +22,7 @@
from diffusers import Transformer2DModel, VQDiffusionPipeline, VQDiffusionScheduler, VQModel
from diffusers.pipelines.vq_diffusion.pipeline_vq_diffusion import LearnedClassifierFreeSamplingEmbeddings
-from diffusers.utils import load_numpy, nightly, torch_device
-from diffusers.utils.testing_utils import require_torch_gpu
+from diffusers.utils.testing_utils import load_numpy, nightly, require_torch_gpu, torch_device
torch.backends.cuda.matmul.allow_tf32 = False
diff --git a/tests/pipelines/wuerstchen/test_wuerstchen_combined.py b/tests/pipelines/wuerstchen/test_wuerstchen_combined.py
index 7d2e98030b30..9b680da27871 100644
--- a/tests/pipelines/wuerstchen/test_wuerstchen_combined.py
+++ b/tests/pipelines/wuerstchen/test_wuerstchen_combined.py
@@ -21,8 +21,7 @@
from diffusers import DDPMWuerstchenScheduler, WuerstchenCombinedPipeline
from diffusers.pipelines.wuerstchen import PaellaVQModel, WuerstchenDiffNeXt, WuerstchenPrior
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
+from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu, torch_device
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/wuerstchen/test_wuerstchen_decoder.py b/tests/pipelines/wuerstchen/test_wuerstchen_decoder.py
index 709e2c1a3436..7891056d10c5 100644
--- a/tests/pipelines/wuerstchen/test_wuerstchen_decoder.py
+++ b/tests/pipelines/wuerstchen/test_wuerstchen_decoder.py
@@ -21,8 +21,7 @@
from diffusers import DDPMWuerstchenScheduler, WuerstchenDecoderPipeline
from diffusers.pipelines.wuerstchen import PaellaVQModel, WuerstchenDiffNeXt
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device
from ..test_pipelines_common import PipelineTesterMixin
diff --git a/tests/pipelines/wuerstchen/test_wuerstchen_prior.py b/tests/pipelines/wuerstchen/test_wuerstchen_prior.py
index a255a665c48e..045729b30b6c 100644
--- a/tests/pipelines/wuerstchen/test_wuerstchen_prior.py
+++ b/tests/pipelines/wuerstchen/test_wuerstchen_prior.py
@@ -21,8 +21,7 @@
from diffusers import DDPMWuerstchenScheduler, WuerstchenPriorPipeline
from diffusers.pipelines.wuerstchen import WuerstchenPrior
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, skip_mps
+from diffusers.utils.testing_utils import enable_full_determinism, skip_mps, torch_device
from ..test_pipelines_common import PipelineTesterMixin
@@ -146,7 +145,6 @@ def test_wuerstchen_prior(self):
image_slice = image[0, 0, 0, -10:]
image_from_tuple_slice = image_from_tuple[0, 0, 0, -10:]
-
assert image.shape == (1, 2, 24, 24)
expected_slice = np.array(
@@ -161,7 +159,7 @@ def test_wuerstchen_prior(self):
218.00089,
-2731.5745,
-8056.734,
- ],
+ ]
)
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
@@ -176,7 +174,7 @@ def test_inference_batch_single_identical(self):
test_max_difference=test_max_difference,
relax_max_difference=relax_max_difference,
test_mean_pixel_difference=test_mean_pixel_difference,
- expected_max_diff=1e-1,
+ expected_max_diff=2e-1,
)
@skip_mps
diff --git a/tests/schedulers/test_scheduler_dpm_sde.py b/tests/schedulers/test_scheduler_dpm_sde.py
index 7906c8d5d4e9..253a0a478b41 100644
--- a/tests/schedulers/test_scheduler_dpm_sde.py
+++ b/tests/schedulers/test_scheduler_dpm_sde.py
@@ -1,8 +1,7 @@
import torch
from diffusers import DPMSolverSDEScheduler
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import require_torchsde
+from diffusers.utils.testing_utils import require_torchsde, torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_euler.py b/tests/schedulers/test_scheduler_euler.py
index 0c3b065161db..2aba46ba3381 100644
--- a/tests/schedulers/test_scheduler_euler.py
+++ b/tests/schedulers/test_scheduler_euler.py
@@ -1,7 +1,7 @@
import torch
from diffusers import EulerDiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_euler_ancestral.py b/tests/schedulers/test_scheduler_euler_ancestral.py
index 9866bd12d6af..b2887e89b720 100644
--- a/tests/schedulers/test_scheduler_euler_ancestral.py
+++ b/tests/schedulers/test_scheduler_euler_ancestral.py
@@ -1,7 +1,7 @@
import torch
from diffusers import EulerAncestralDiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_heun.py b/tests/schedulers/test_scheduler_heun.py
index ae0fe26b11ba..69f6526b673a 100644
--- a/tests/schedulers/test_scheduler_heun.py
+++ b/tests/schedulers/test_scheduler_heun.py
@@ -1,7 +1,7 @@
import torch
from diffusers import HeunDiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_kdpm2_ancestral.py b/tests/schedulers/test_scheduler_kdpm2_ancestral.py
index 45371121e66b..b3d391ac8a83 100644
--- a/tests/schedulers/test_scheduler_kdpm2_ancestral.py
+++ b/tests/schedulers/test_scheduler_kdpm2_ancestral.py
@@ -1,7 +1,7 @@
import torch
from diffusers import KDPM2AncestralDiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_kdpm2_discrete.py b/tests/schedulers/test_scheduler_kdpm2_discrete.py
index 4f1bd1f8aeb7..4876caaa996f 100644
--- a/tests/schedulers/test_scheduler_kdpm2_discrete.py
+++ b/tests/schedulers/test_scheduler_kdpm2_discrete.py
@@ -1,7 +1,7 @@
import torch
from diffusers import KDPM2DiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_scheduler_lms.py b/tests/schedulers/test_scheduler_lms.py
index 1e0a8212354d..cd5376d305c4 100644
--- a/tests/schedulers/test_scheduler_lms.py
+++ b/tests/schedulers/test_scheduler_lms.py
@@ -1,7 +1,7 @@
import torch
from diffusers import LMSDiscreteScheduler
-from diffusers.utils import torch_device
+from diffusers.utils.testing_utils import torch_device
from .test_schedulers import SchedulerCommonTest
diff --git a/tests/schedulers/test_schedulers.py b/tests/schedulers/test_schedulers.py
index 4b1834f62a4e..b936b6334627 100755
--- a/tests/schedulers/test_schedulers.py
+++ b/tests/schedulers/test_schedulers.py
@@ -40,8 +40,7 @@
)
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.schedulers.scheduling_utils import SchedulerMixin
-from diffusers.utils import torch_device
-from diffusers.utils.testing_utils import CaptureLogger
+from diffusers.utils.testing_utils import CaptureLogger, torch_device
from ..others.test_utils import TOKEN, USER, is_staging_test
diff --git a/utils/check_copies.py b/utils/check_copies.py
index 0ba573bb920e..df5816b4ac03 100644
--- a/utils/check_copies.py
+++ b/utils/check_copies.py
@@ -15,7 +15,6 @@
import argparse
import glob
-import importlib.util
import os
import re
@@ -29,15 +28,6 @@
REPO_PATH = "."
-# This is to make sure the diffusers module imported is the one in the repo.
-spec = importlib.util.spec_from_file_location(
- "diffusers",
- os.path.join(DIFFUSERS_PATH, "__init__.py"),
- submodule_search_locations=[DIFFUSERS_PATH],
-)
-diffusers_module = spec.loader.load_module()
-
-
def _should_continue(line, indent):
return line.startswith(indent) or len(line) <= 1 or re.search(r"^\s*\)(\s*->.*:|:)\s*$", line) is not None
diff --git a/utils/check_dummies.py b/utils/check_dummies.py
index 16b7c8c117dc..8754babc554b 100644
--- a/utils/check_dummies.py
+++ b/utils/check_dummies.py
@@ -71,24 +71,27 @@ def read_init():
# Get to the point we do the actual imports for type checking
line_index = 0
+ while not lines[line_index].startswith("if TYPE_CHECKING"):
+ line_index += 1
+
backend_specific_objects = {}
# Go through the end of the file
while line_index < len(lines):
# If the line contains is_backend_available, we grab all objects associated with the `else` block
backend = find_backend(lines[line_index])
if backend is not None:
- while not lines[line_index].startswith("else:"):
+ while not lines[line_index].startswith(" else:"):
line_index += 1
line_index += 1
objects = []
# Until we unindent, add backend objects to the list
- while line_index < len(lines) and len(lines[line_index]) > 1:
+ while len(lines[line_index]) <= 1 or lines[line_index].startswith(" " * 8):
line = lines[line_index]
single_line_import_search = _re_single_line_import.search(line)
if single_line_import_search is not None:
objects.extend(single_line_import_search.groups()[0].split(", "))
- elif line.startswith(" " * 8):
- objects.append(line[8:-2])
+ elif line.startswith(" " * 12):
+ objects.append(line[12:-2])
line_index += 1
if len(objects) > 0: