
Conversation

@Flova (Member) commented Jan 28, 2025

Proposed changes

This adds an experimental script that distills a diffusion model into one that performs inference in a single step.
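
For context, a minimal sketch of the general idea (not the actual script; the teacher API, method names, and training loop below are illustrative assumptions): a student copy of the network is trained to reproduce, in a single forward pass, the output the pretrained teacher produces with its full multi-step denoising loop.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def teacher_sample(teacher, noise, num_steps=30):
    # Run the teacher's full iterative denoising loop (hypothetical API).
    x = noise
    for t in reversed(range(num_steps)):
        x = teacher.denoise_step(x, t)
    return x

def distillation_step(student, teacher, optimizer, batch_size, sample_shape):
    noise = torch.randn(batch_size, *sample_shape)
    target = teacher_sample(teacher, noise)  # multi-step teacher output
    prediction = student(noise)              # single-step student output
    loss = F.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```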

Checklist

  • Write documentation
  • Create issues for future work
  • This PR is on our DDLitLab project board

@Flova Flova changed the title Add destillation script for faster inference Add distillation script for faster inference Jan 30, 2025

# Load the learning rate scheduler state if a checkpoint is provided
-if args.checkpoint is not None:
+if args.checkpoint is not None and False:
A contributor commented:

This condition will always evaluate to False.
I assume you meant `if args.checkpoint is not None and args.checkpoint is not False`, in which case I would just write `if args.checkpoint:`.

But I don't really see when it could be False anyway, since it is defined as a str argument.

@Flova (Member, Author) replied:

It was just a hack to deactivate this code path; I will clean this up with a separate flag. Loading the learning rate scheduler state is not desirable when starting from a pretrained model, because the end of the schedule was already reached during pretraining. In contrast, you do want to resume the schedule if, e.g., the training was interrupted.
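
A minimal sketch of the cleanup described above, assuming a hypothetical `--resume` flag and standard PyTorch checkpoint conventions (`model`, `optimizer`, and `lr_scheduler` are assumed to exist already; none of this is the script's actual API):

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--checkpoint", type=str, default=None,
                    help="Path to a checkpoint to load model weights from")
parser.add_argument("--resume", action="store_true",
                    help="Also restore optimizer and LR scheduler state (interrupted runs)")
args = parser.parse_args()

if args.checkpoint is not None:
    checkpoint = torch.load(args.checkpoint)
    model.load_state_dict(checkpoint["model"])
    if args.resume:
        # Resuming an interrupted run: restore the full training state,
        # including the current position in the learning rate schedule.
        optimizer.load_state_dict(checkpoint["optimizer"])
        lr_scheduler.load_state_dict(checkpoint["lr_scheduler"])
    # When distilling from a pretrained model, the scheduler is left at its
    # initial state, since the pretraining schedule already ran to completion.
```

This replaces the `and False` hack with an explicit switch, so both use cases (fine-tuning from a pretrained model vs. resuming an interrupted run) are expressible.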

Flova merged commit 71e8c70 into main Apr 16, 2025 (4 of 5 checks passed)
Flova deleted the feature/destillation branch April 16, 2025 18:26
github-project-automation bot moved this from In progress to Done in SoccerDiffusion Apr 16, 2025
Flova added a commit that referenced this pull request Apr 21, 2025
Add distillation script for faster inference
