Hunyuan Video adjustments #11140

Open · Ednaordinary wants to merge 12 commits into main

Conversation

Ednaordinary

What does this PR do?

Two things:

The STG HunyuanVideo community pipeline had a typo where the forward methods were being replaced within the denoising loop. Because the loop over the forward methods also used i as its variable, callback_on_step_end returned with the last index of the STG layer list instead of the timestep index. This only seemed to appear in the HunyuanVideo STG pipeline, but I may have missed something.

UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=shift) works for HunyuanVideo but errors out with the current behavior. This PR changes the error to a warning instead (there may be a cleaner way of doing this).
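For reference, a minimal sketch of the scheduler swap in question; the checkpoint id and flow_shift value below are illustrative, not taken from this PR:

    import torch
    from diffusers import HunyuanVideoPipeline, UniPCMultistepScheduler

    pipe = HunyuanVideoPipeline.from_pretrained(
        "hunyuanvideo-community/HunyuanVideo",  # illustrative checkpoint id
        torch_dtype=torch.bfloat16,
    )
    pipe.scheduler = UniPCMultistepScheduler(
        prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=7.0  # illustrative shift
    )
    # The error (a warning with this PR, per the description above) surfaces
    # when the pipeline is called and timestep retrieval runs, not at assignment.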

Before submitting

Who can review?

@yiyixuxu @asomoza

@hlky hlky (Member) left a comment


Hi @Ednaordinary. I've left some comments. Ideally we would support sigmas in UniPCMultistepScheduler's set_timesteps, since retrieve_timesteps is a # Copied from function and it is preferable to keep it that way. Some models/pipelines require a specific sigma schedule that is passed from the pipeline, so prediction_type/use_flow_sigmas/defaulting to passing num_inference_steps may work for some cases but potentially not all.
I will let @yiyixuxu comment further on that change, and we will need to update it everywhere if we go ahead with it (this process is easy though: we would simply update diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps, then run make fix-copies).
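For readers unfamiliar with the convention: each pipeline's retrieve_timesteps carries a # Copied from marker pointing at the canonical definition, so the canonical copy is edited and the rest are regenerated. A rough illustration (signature abbreviated):

    # diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.retrieve_timesteps
    def retrieve_timesteps(scheduler, num_inference_steps=None, device=None, timesteps=None, sigmas=None, **kwargs):
        ...

    # after editing the canonical version in pipeline_stable_diffusion.py, run:
    #   make fix-copies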

if self.do_spatio_temporal_guidance:
    for i in stg_applied_layers_idx:
        self.transformer.transformer_blocks[i].forward = types.MethodType(
            forward_without_stg, self.transformer.transformer_blocks[i]
        )

Note forward_without_stg vs forward_with_stg. Let's use something like stg_idx here so it doesn't conflict with the index from enumerate(timesteps).

However, any results you have to share using this PR would be interesting, as it currently uses forward_with_stg for both noise_pred and noise_pred_perturb.
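A minimal sketch of the rename, assuming the outer denoising loop is for i, t in enumerate(timesteps) and that the with/without forwards are swapped around the perturbed prediction (exact placement within the loop is up to the pipeline):

    # inside `for i, t in enumerate(timesteps)`; using stg_idx keeps `i` intact for callback_on_step_end
    if self.do_spatio_temporal_guidance:
        for stg_idx in stg_applied_layers_idx:
            self.transformer.transformer_blocks[stg_idx].forward = types.MethodType(
                forward_with_stg, self.transformer.transformer_blocks[stg_idx]
            )
        # ... compute noise_pred_perturb with the skip-layer forward in place ...
        for stg_idx in stg_applied_layers_idx:
            self.transformer.transformer_blocks[stg_idx].forward = types.MethodType(
                forward_without_stg, self.transformer.transformer_blocks[stg_idx]
            )
        # ... compute noise_pred with the regular forward restored ...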

Ednaordinary (Author)

Ah, thanks for the catch, I'll update. I found that this implementation became scheduler-agnostic (or STG became the 'scheduler'?). I'll test that a bit more and see what the exact side effects are.

Ednaordinary (Author) commented Mar 23, 2025

Hi @hlky! Thanks for your review. I agree this change could cause some weirdness with sigmas. I don't believe this would work with all pipelines, since HunyuanVideo passes both num_inference_steps and sigmas, which retrieve_timesteps says not to do (but it allows us to ignore sigmas if the scheduler does not accept them). A possible cleaner change is to leave retrieve_timesteps as it is and instead introduce an ignore_custom_sigmas argument in the call function. Then we could do:

if ignore_custom_sigmas:
    timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, device=device, num_inference_steps=num_inference_steps)
else:
    sigmas = np.linspace(1.0, 0.0, num_inference_steps + 1)[:-1] if sigmas is None else sigmas
    timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, device=device, sigmas=sigmas)

The user would have to discover this themselves if they try to use UniPC or another scheduler that doesn't accept custom sigmas, though. I'm not sure what a clean way to add a hint in that case would be; one possibility is sketched below.
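One possible way to surface that hint, reusing the introspection that retrieve_timesteps already performs. ignore_custom_sigmas is the hypothetical call argument proposed above, and the message wording is just a sketch:

    import inspect

    # assumes the module-level `logger` that diffusers pipelines already define
    accepts_sigmas = "sigmas" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
    if not accepts_sigmas and not ignore_custom_sigmas:
        logger.warning(
            f"{self.scheduler.__class__.__name__} does not accept a custom sigmas schedule; "
            "pass ignore_custom_sigmas=True to fall back to num_inference_steps."
        )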
