Error when using enable_sequential_cpu_offload() #23
No idea. @patrickvonplaten, any hints?
Ah I see @cmdr2, I think when you use `enable_sequential_cpu_offload()` you need to build the conditioning tensor before enabling the offload:

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.to("cuda")

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])

# generate image
pipeline.enable_sequential_cpu_offload()
images = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images
images[0].save("image.jpg")
```
Thanks @patrickvonplaten and @damian0815!
Hi @patrickvonplaten and @damian0815, sorry for reopening this issue, but the suggestion in the previous message only works for the first prompt. The next prompt fails again with `NotImplementedError: Cannot copy out of meta tensor; no data!`

Here's the modified version of the suggestion (which essentially just calls `build_conditioning_tensor()` a second time):

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.to("cuda")

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# 1. upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])

# generate image
pipeline.enable_sequential_cpu_offload()
images = pipeline(prompt_embeds=conditioning, num_inference_steps=4).images
images[0].save("image.jpg")

# ----- do this again -----
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
```

This fails with the same stack-trace from my first message. It's very strange, because in theory the offload hooks should take care of moving the text encoder back onto the GPU. I even tried monkey-patching, without luck.

Thanks for any help with this! :)
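My understanding of why the second call fails (an assumption based on how `accelerate`'s sequential offload is documented to behave, not something verified in its source): the hooks park each submodule's weights on the `meta` device between forward passes, so a direct `text_encoder` call sees weights with no data. Continuing the snippet above, and assuming the standard `CLIPTextModel` attribute layout:

```python
# Assumption: after the pipeline call, accelerate's offload hooks have moved
# the text encoder weights back to the "meta" device (shapes, but no data).
w = pipeline.text_encoder.text_model.embeddings.token_embedding.weight
print(w.device)  # expected: meta
# build_conditioning_tensor() runs the text encoder directly, so it trips
# over these meta tensors and raises "Cannot copy out of meta tensor".
```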
Created a PR for this: #31. After this PR, the following code should work:

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# pipeline.to("cuda")
pipeline.enable_sequential_cpu_offload()

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder, device="cuda:0")

# 1. upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])

# generate image
images = pipeline(prompt_embeds=conditioning, num_inference_steps=4).images
images[0].save("image.jpg")

# ----- do this again -----
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
```
The hacky temporary workaround is to move the `text_encoder` modules back to the GPU each time we need to build a tensor. I think this is what @patrickvonplaten meant? This works for me (but PR #31 is obviously better!):

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# pipeline.to("cuda")
pipeline.enable_sequential_cpu_offload()

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# 1. upweight "ball"
# manually trigger accelerate's pre-forward hooks, which load the
# offloaded text encoder weights back onto the GPU
[m._hf_hook.pre_forward(m) for m in pipeline.text_encoder.modules() if hasattr(m, "_hf_hook")]
print("moved to", pipeline.text_encoder.device)

prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])

# generate image
images = pipeline(prompt_embeds=conditioning, num_inference_steps=4).images
images[0].save("image.jpg")

# ----- do this again -----
[m._hf_hook.pre_forward(m) for m in pipeline.text_encoder.modules() if hasattr(m, "_hf_hook")]
print("moved to", pipeline.text_encoder.device)

prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
```
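If you also want to reclaim the GPU memory between prompts, the symmetric call ought to move the weights back off the device. This is an untested sketch continuing the snippet above; in particular, passing `None` as `post_forward`'s output argument is an assumption:

```python
# Untested sketch: trigger the post-forward hooks to offload the text encoder
# weights again after building the conditioning tensor. The None stands in
# for the forward output that post_forward normally receives (assumption).
[m._hf_hook.post_forward(m, None) for m in pipeline.text_encoder.modules() if hasattr(m, "_hf_hook")]
print("moved back to", pipeline.text_encoder.device)
```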
Fix #23: enable compel to work with `enable_sequential_cpu_offload()` by optionally specifying the exact device to create the tensors on.
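To illustrate the idea behind the fix (a sketch only; the function name here is hypothetical and this is not compel's actual internals):

```python
import torch

# Hypothetical sketch of the approach in PR #31: prefer a caller-supplied
# device over text_encoder.device, which reports "meta" while sequential
# CPU offload is active. (transformers models expose a .device property.)
def resolve_device(text_encoder, device=None):
    if device is not None:
        return torch.device(device)  # e.g. Compel(..., device="cuda:0")
    return text_encoder.device  # previous behavior; may be "meta"
```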
Hi,

I'm getting an error when applying `pipe.enable_sequential_cpu_offload()`, as per https://huggingface.co/docs/diffusers/optimization/fp16#offloading-to-cpu-with-accelerate-for-memory-savings

compel works fine if we don't apply CPU offloading, but fails with it. diffusers + CPU offloading works fine without compel.

compel: 1.0.5
diffusers: 0.14.0
accelerate: 0.15.0
transformers: 4.26.1
Modified version of the example from the README, using `enable_sequential_cpu_offload()`:
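(A reconstruction of that snippet, assuming the README example with the offload call added; the original code did not survive the copy, so treat this as best-effort:)

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.enable_sequential_cpu_offload()

compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)  # fails: weights are on "meta"

# generate image
images = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images
images[0].save("image.jpg")
```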
We get:

```
NotImplementedError: Cannot copy out of meta tensor; no data!
```

Full stacktrace:
Any ideas for this problem would be really appreciated!
Thanks!
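For what it's worth, the error itself can be reproduced in isolation: a meta tensor carries shape and dtype but no storage, so copying it to a real device fails. A minimal sketch, independent of diffusers/compel:

```python
import torch

# A "meta" tensor has shape and dtype but no underlying data.
t = torch.empty(3, device="meta")
t.to("cpu")  # NotImplementedError: Cannot copy out of meta tensor; no data!
```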