Closed
Labels
bug (Something isn't working)
Description
Describe the bug
The VAE mid-block always passes a temb keyword argument to its attention module, but many attention processors (for example SlicedAttnProcessor) do not accept it, so the call raises a TypeError.
A (currently incomplete) fix for this is #6687
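The failure mode can be illustrated without diffusers at all. The sketch below (class and function names are hypothetical, chosen only to mirror the shapes involved) shows a caller that unconditionally forwards temb, a processor with a fixed signature that breaks, and a kwargs-tolerant processor that does not; accepting and ignoring unknown keyword arguments is one plausible shape for a fix, not necessarily what #6687 does.

```python
class StrictProcessor:
    """Mimics processors like SlicedAttnProcessor: no temb parameter."""
    def __call__(self, hidden_states):
        return hidden_states


class TolerantProcessor:
    """One possible fix shape: swallow unknown keyword arguments."""
    def __call__(self, hidden_states, *args, **kwargs):
        return hidden_states


def mid_block_forward(processor, hidden_states, temb=None):
    # Mirrors the mid-block in unet_2d_blocks.py, which always forwards temb.
    return processor(hidden_states, temb=temb)


try:
    mid_block_forward(StrictProcessor(), [1.0])
except TypeError as e:
    print(f"TypeError: {e}")  # unexpected keyword argument 'temb'

mid_block_forward(TolerantProcessor(), [1.0])  # works
```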
Reproduction
from diffusers import DiffusionPipeline
from diffusers.models.attention_processor import SlicedAttnProcessor

pipe = DiffusionPipeline.from_pretrained(
    "hf-internal-testing/tiny-stable-diffusion-torch",
)

# Works with the default attention processor.
pipe(prompt="a trout", num_inference_steps=1).images[0].save("trout1.png")

# Fails: the VAE mid-block forwards temb, which SlicedAttnProcessor rejects.
pipe.vae.set_attn_processor(SlicedAttnProcessor(1))
pipe(prompt="a trout", num_inference_steps=1).images[0].save("trout2.png")
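Until a fix lands, one possible workaround is a thin wrapper that strips temb before delegating to the wrapped processor. This is a sketch, not part of diffusers; the DropTemb name is hypothetical:

```python
class DropTemb:
    """Hypothetical wrapper: forwards every call to the wrapped
    processor but silently discards the temb keyword argument,
    which processors like SlicedAttnProcessor cannot accept."""

    def __init__(self, processor):
        self.processor = processor

    def __call__(self, *args, temb=None, **kwargs):
        # temb is intentionally dropped; everything else passes through.
        return self.processor(*args, **kwargs)

# Usage with the reproduction above (untested against every processor):
# pipe.vae.set_attn_processor(DropTemb(SlicedAttnProcessor(1)))
```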
Logs
Traceback (most recent call last):
File "/Users/lsb/jupyterlab/sliced-attention-temb.py", line 12, in <module>
pipe(prompt="a trout", num_inference_steps=1).images[0].save("trout2.png")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 1042, in __call__
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False, generator=generator)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 304, in decode
decoded = self._decode(z).sample
^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 275, in _decode
dec = self.decoder(z)
^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/models/autoencoders/vae.py", line 333, in forward
sample = self.mid_block(sample, latent_embeds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/models/unet_2d_blocks.py", line 624, in forward
hidden_states = attn(hidden_states, temb=temb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lsb/jupyterlab/.direnv/python-3.11.7/lib/python3.11/site-packages/diffusers/models/attention_processor.py", line 527, in forward
return self.processor(
^^^^^^^^^^^^^^^
TypeError: SlicedAttnProcessor.__call__() got an unexpected keyword argument 'temb'
System Info
- diffusers version: 0.26.0.dev0
- Platform: macOS-14.2.1-arm64-arm-64bit
- Python version: 3.11.7
- PyTorch version (GPU?): 2.1.2 (False)
- Huggingface_hub version: 0.20.3
- Transformers version: 4.36.2
- Accelerate version: 0.25.0
- xFormers version: not installed
- Using GPU in script?:
- Using distributed or parallel set-up in script?: no
Who can help?
No response