excessive graph breaks on attention.py and attention_processor.py for control_net on torch.compile #3218
Comments
Cc: @pcuenca
@shingjan, we advise optimizing only the UNet: pipe.unet = torch.compile(pipe.unet, backend='inductor')
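A minimal sketch of that suggestion, assuming the standard ControlNet pipeline setup from the diffusers docs; the checkpoints used here are illustrative, not taken from this thread:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Illustrative checkpoints; any SD 1.5 base plus a matching ControlNet
# works the same way.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Compile only the UNet, as advised above; the VAE, text encoder, and
# ControlNet stay in eager mode, so Dynamo only has to trace the UNet.
pipe.unet = torch.compile(pipe.unet, backend="inductor")
```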
@patrickvonplaten thanks for the response! I think AttnProcessor2_0 is heavily used in unet so even if only |
I don't fully understand this; what exactly is the issue here? Can we reproduce it somehow?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@patrickvonplaten Sorry for the late reply. Yes, I did a rebase, and most of the graph breaks seen on …
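One way to quantify the remaining breaks is Dynamo's explain utility. A sketch, assuming PyTorch >= 2.1 (where torch._dynamo.explain(fn)(*args) returns an ExplainOutput; older releases use a different explain signature); the checkpoint and input shapes (SD 1.5, 512x512 latents) are illustrative:

```python
import torch
from diffusers import UNet2DConditionModel

# Illustrative checkpoint; any SD 1.5 UNet has the same input shapes.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

# Dummy inputs matching one denoising step at 512x512 with classifier-free
# guidance (batch of 2): 4-channel 64x64 latents and 77 CLIP text tokens.
sample = torch.randn(2, 4, 64, 64, dtype=torch.float16, device="cuda")
timestep = torch.tensor([999], device="cuda")
states = torch.randn(2, 77, 768, dtype=torch.float16, device="cuda")

# ExplainOutput reports how many graphs Dynamo produced and why it broke.
report = torch._dynamo.explain(unet)(sample, timestep, encoder_hidden_states=states)
print(report.graph_break_count)
print(report.break_reasons)
```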
Describe the bug
I tried to run the ControlNet example from this blog post, and it turned out that BasicTransformerBlock causes a large number of graph breaks (>100) on a single ControlNet pipeline. Ideally the whole BasicTransformerBlock.forward should be included in a single frame for speedups. The graph breaks occur for both self attention and cross attention. Is there a way to reduce the graph breaks so that StableDiffusionControlNetPipeline works better with torch.compile?
Reproduction
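A minimal sketch of a comparable reproduction, along the lines of the blog post's ControlNet example; the checkpoints, prompt, and the random stand-in conditioning image are illustrative, not the reporter's original script:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Compiling both sub-models is where the reported graph breaks in
# BasicTransformerBlock would surface during the first inference call.
pipe.unet = torch.compile(pipe.unet, backend="inductor")
pipe.controlnet = torch.compile(pipe.controlnet, backend="inductor")

# A random tensor in [0, 1] stands in for a real canny edge map.
cond = torch.rand(1, 3, 512, 512)
image = pipe("a photo of a house", image=cond, num_inference_steps=20).images[0]
```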
System Info
Ubuntu 20.04 with CUDA 11.8
diffusers 0.16.0.dev0 /home/yj/diffusers
torch 2.1.0a0+git0bbf8a9 /home/yj/pytorch