I tried to run the diffusers pipeline for LEDITS++ with `LEditsPPPipelineStableDiffusionXL`, but I encountered a CUDA out-of-memory error, which I find abnormal, since the error states that it tried to allocate another 136 GiB.
Here is all the information needed.
Reproduction
Created and activated a virtual env: `python -m venv .leditspp_env && source .leditspp_env/bin/activate`
Installed accelerate and transformers: `pip install accelerate transformers`
$ python test_leditspp.py
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00, 2.02it/s]
This pipeline only supports DDIMScheduler and DPMSolverMultistepScheduler. The scheduler has been changed to DPMSolverMultistepScheduler.
Your input images far exceed the default resolution of the underlying diffusion model. The output images may contain severe artifacts! Consider down-sampling the input using the `height` and `width` parameters
Traceback (most recent call last):
File "/home/vdelale/code/test_leditspp.py", line 20, in <module>
_ = pipe.invert(
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py", line 1576, in invert
image_rec = self.vae.decode(
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 303, in decode
decoded = self._decode(z, return_dict=False)[0]
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 276, in _decode
dec = self.decoder(z)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/vae.py", line 337, in forward
sample = up_block(sample, latent_embeds)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 2750, in forward
hidden_states = upsampler(hidden_states)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/upsampling.py", line 180, in forward
hidden_states = self.conv(hidden_states)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.51 GiB. GPU has a total capacity of 79.11 GiB of which 21.06 GiB is free. Process 33554 has 1.25 GiB memory in use. Process 3812073 has 2.32 GiB memory in use. Process 3812837 has 1.04 GiB memory in use. Including non-PyTorch memory, this process has 53.40 GiB memory in use. Of the allocated memory 48.46 GiB is allocated by PyTorch, and 4.21 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
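For scale, an allocation of that size is plausible when the input image is not down-sampled: the VAE decodes at the full input resolution, and a single convolution feature map grows with height × width. A rough back-of-the-envelope check (the 512 channels and 8192×8192 spatial size are illustrative assumptions, not values read from the traceback):

```python
# Rough activation-memory estimate for one conv feature map in a VAE decoder.
# Channel count and spatial size here are illustrative assumptions.
channels = 512
height = width = 8192          # upsampled spatial resolution (assumed)
bytes_per_element = 4          # float32

activation_bytes = channels * height * width * bytes_per_element
activation_gib = activation_bytes / 1024**3
print(f"{activation_gib:.0f} GiB")  # -> 128 GiB for a single fp32 feature map
```

At the model's default 1024×1024 output, the same estimate drops by a factor of 64 to about 2 GiB, which is consistent with the earlier warning that the input images far exceed the model's default resolution.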
Hey @vdelale! I missed this bug - I think it's related to the size of the image. Does it still error if you resize it, e.g. by adding `image = image.resize((512, 512))`?
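A fixed 512×512 resize distorts non-square images; if that matters, the input can instead be scaled to fit within the model's default resolution while preserving the aspect ratio. A small stdlib-only sketch (the helper name `fit_within` and the 512 limit are assumptions, not part of the diffusers API):

```python
def fit_within(width: int, height: int, max_side: int = 512) -> tuple[int, int]:
    """Scale (width, height) down so the longer side equals max_side,
    preserving the aspect ratio. Leaves smaller images untouched."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)

# e.g. a 4032x3024 photo; then: image = image.resize(fit_within(*image.size))
print(fit_within(4032, 3024))  # -> (512, 384)
```

Note that the SDXL VAE downsamples by a factor of 8, so the chosen dimensions should ideally be multiples of 8 (512 and 384 both are).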
Sorry for the long wait - yes, that worked.
However, I then encountered another error, the same one mentioned in #7972.
Curiously, the error did not occur on the first call of the generation, only on subsequent ones.
This time, I added some lines to the source code of diffusers - mainly in pipeline_leditspp_stable_diffusion_xl.py and some other scripts - to cast the tensors to the right device and torch.dtype.
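The patch itself isn't shown; the general pattern for this kind of fix is to move intermediate tensors onto the device and dtype of the module that will consume them, rather than assuming they already match. A minimal sketch of that pattern (not the actual diffusers change; float64 is used here only so the sketch runs on CPU, whereas the issue involved float16 on CUDA):

```python
import torch

# Hypothetical example: a module whose weights are in a different dtype
# than the tensor another component produced.
layer = torch.nn.Linear(4, 4).to(dtype=torch.float64)
hidden = torch.randn(2, 4)  # float32, e.g. produced by another component

# Cast to the consuming module's device and dtype before the forward pass;
# without this, PyTorch raises a dtype-mismatch RuntimeError.
hidden = hidden.to(device=layer.weight.device, dtype=layer.weight.dtype)
out = layer(hidden)
print(out.dtype)  # torch.float64
```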
I also installed diffusers from source (`pip install git+https://github.com/huggingface/diffusers`), as described in the Hugging Face installation docs under "Install from source". Installing from source was necessary; otherwise I hit the error I reported in Type mismatch for LEDITS++ #7972.
System Info
OS: Ubuntu 22.04.3 LTS
Python version: 3.10.12
Python packages:
GPU: H100 (approximately 80 GiB of memory)
Who can help?
Maybe @yiyixuxu, @sayakpaul, or @DN6 - I don't know to what extent `LEditsPPPipelineStableDiffusionXL` is linked to `StableDiffusionXLPipeline`.