
Update LCMScheduler Inference Timesteps to be More Evenly Spaced #5836

Merged

Conversation

@dg845 dg845 commented Nov 17, 2023

What does this PR do?

This PR modifies LCMScheduler.set_timesteps to produce timesteps that are more evenly spaced across the range of candidate timesteps, which should improve sample quality.

Fixes #5815.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@patrickvonplaten
@patil-suraj
@aifartist
@luosiallen

dg845 commented Nov 17, 2023

The solution I have currently implemented differs from the solution suggested in #5815 (comment) as follows:

  • Instead of generating timesteps directly, we select indices from the training/distillation schedule lcm_origin_timesteps. This should ensure that we always end up with timesteps that we saw during training/distillation.
  • Because np.linspace(0, len(lcm_origin_timesteps) - 1, num=num_inference_steps) is used to generate the indices, the first and last timesteps in the final inference timestep schedule are always anchored to the first and last timesteps in lcm_origin_timesteps. (Under default settings, this would be 999 and 19 respectively.)
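
For concreteness, here is a minimal sketch of the index-selection logic described above. It is not the exact merged code; the default values (num_train_timesteps=1000, original_inference_steps=50) and the floor-based index rounding are assumptions for illustration:

```python
import numpy as np

# Assumed defaults, for illustration only.
num_train_timesteps = 1000
original_inference_steps = 50
k = num_train_timesteps // original_inference_steps  # 20

# Training/distillation timesteps, descending: [999, 979, ..., 39, 19]
lcm_origin_timesteps = (np.arange(1, original_inference_steps + 1) * k - 1)[::-1]

num_inference_steps = 4
# Evenly spaced indices into the training schedule; including both endpoints
# anchors the first and last inference timesteps to 999 and 19.
indices = np.linspace(0, len(lcm_origin_timesteps) - 1, num=num_inference_steps)
timesteps = lcm_origin_timesteps[np.floor(indices).astype(np.int64)]
print(timesteps)  # [999 679 339  19] under these assumptions
```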

dg845 commented Nov 17, 2023

(Also, note that the timestep schedule produced by LCMScheduler.set_timesteps will in general be different after the change, so any test that uses LCMScheduler with num_inference_steps > 1 will probably need to be fixed.)

@luosiallen

I think we want to avoid cases where 3-step sampling is very similar to 2-step sampling, which means we don't want to anchor the schedule at timestep 19. For example, I think the 2-step schedule [999, 499] is better than [999, 19].
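
For illustration, schedules like [999, 499] arise naturally if the indices are spaced with endpoint=False, so the low end of the training schedule is not forced in. This is a sketch under the same assumptions as above, not what this PR implements:

```python
import numpy as np

# Same assumed training schedule as above: [999, 979, ..., 19].
lcm_origin_timesteps = (np.arange(1, 51) * 20 - 1)[::-1]

for num_inference_steps in (2, 3):
    # endpoint=False spreads indices over [0, len) without ever selecting
    # the last index, so timestep 19 is no longer always in the schedule.
    indices = np.linspace(0, len(lcm_origin_timesteps),
                          num=num_inference_steps, endpoint=False)
    print(lcm_origin_timesteps[np.floor(indices).astype(np.int64)])
# -> [999 499] and [999 679 339] under these assumptions
```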

@patil-suraj left a comment

Thanks a lot for fixing this @dg845, lgtm!

Will run some tests then we can merge.

HuggingFaceDocBuilderDev commented Nov 20, 2023

The documentation is not available anymore as the PR was closed or merged.

@patrickvonplaten left a comment

I trust you here @patil-suraj

@patrickvonplaten left a comment

If you consider the previous timestep spacing a bug, I'm ok with the PR. If not, and it's just a "different way" of doing timestep spacing, we should probably be careful about backwards-breaking changes.

@luosiallen

Should we run more tests? Or, following @patrickvonplaten's suggestion, we could just add a timestep list input to __call__ so users can figure out what works best for them.

@patil-suraj

I'm currently running the slow tests that we have. All the fast tests are passing.
This timestep schedule is correct, as it matches the schedule during training, so it should be the default IMO. We could add the timestep argument to __call__ for additional schedules.
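
As a sketch of what that could look like on the scheduler side (a hypothetical signature, not the actual diffusers API at the time of this PR):

```python
import torch

# Hypothetical extension of LCMScheduler.set_timesteps that accepts a
# user-supplied schedule; the parameter name `timesteps` is an assumption.
def set_timesteps(self, num_inference_steps=None, timesteps=None, device=None):
    if timesteps is not None:
        # Trust the caller's (descending) schedule, e.g. [999, 499].
        self.num_inference_steps = len(timesteps)
        self.timesteps = torch.tensor(timesteps, dtype=torch.long, device=device)
        return
    # ... otherwise fall back to the evenly spaced index selection above ...
```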

@luosiallen

> I'm currently running the slow tests that we have. All the fast tests are passing.
> This timestep schedule is correct, as it matches the schedule during training, so it should be the default IMO. We could add the timestep argument to __call__ for additional schedules.

Thanks @patil-suraj!

@patil-suraj

Some slow tests are failing:

  • pytest tests/lora/test_lora_layers_peft.py::LoraSDXLIntegrationTests::test_sdxl_lcm_lora
  • pytest tests/lora/test_lora_layers_peft.py::LoraSDXLIntegrationTests::test_sdv1_5_lcm_lora
  • pytest tests/lora/test_lora_layers_peft.py::LoraSDXLIntegrationTests::test_sdv1_5_lcm_lora_img2img

@patrickvonplaten would it be okay to adjust the tests in this case?

@patil-suraj

All slow LCM tests are now passing; I updated the images here: https://huggingface.co/datasets/hf-internal-testing/diffusers-images/tree/main/lcm_lora

@patil-suraj patil-suraj merged commit dc21498 into huggingface:main Nov 20, 2023
14 checks passed
@dg845 dg845 deleted the lcm-scheduler-fix-timestep-schedule branch November 21, 2023 04:56
affromero pushed a commit to affromero/diffusers that referenced this pull request Nov 24, 2023
Update LCMScheduler Inference Timesteps to be More Evenly Spaced (huggingface#5836)

* Change LCMScheduler.set_timesteps to pick more evenly spaced inference timesteps.

* Change inference_indices implementation to better match previous behavior.

* Add num_inference_steps=26 test case to test_inference_steps.

* run CI

---------

Co-authored-by: patil-suraj <surajp815@gmail.com>
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024
Development

Successfully merging this pull request may close these issues.

LCM Scheduler isn't perfect in uniformly spreading the timesteps across the full range