Describe the bug
In the part of the training script that resumes from a checkpoint, I noticed that progress_bar.update(1) is called while step < resume_step. Is this correct? In my understanding it makes the progress bar display incorrectly: the progress bar is supposed to count gradient-descent steps (global_step out of max_train_steps), but during this phase the script is only skipping samples and not performing any gradient descent.
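For context, here is a minimal runnable sketch (a toy with made-up numbers, not code copied from the script) that mirrors the structure of the resume loop I mean: during the skipping phase the bar is advanced once per skipped gradient-accumulation cycle rather than once per actual optimizer step.

```python
from tqdm.auto import tqdm

# Toy reproduction of the pattern (assumed/made-up values, not from the real script):
# the bar's total is max_train_steps (optimizer steps), yet while resuming it is also
# updated once per skipped gradient-accumulation cycle.
max_train_steps = 10
gradient_accumulation_steps = 2
resume_step = 8  # dataloader steps already consumed before the checkpoint (toy value)

progress_bar = tqdm(range(max_train_steps), desc="Steps")
global_step = resume_step // gradient_accumulation_steps  # restored from the checkpoint (toy value)

for step in range(max_train_steps * gradient_accumulation_steps):
    if step < resume_step:
        if step % gradient_accumulation_steps == 0:
            progress_bar.update(1)  # the update the issue is about: no gradient step happens here
        continue
    # ... a real training step (forward, loss, backward, optimizer step) would happen here ...
    if (step + 1) % gradient_accumulation_steps == 0:
        progress_bar.update(1)  # the bar also advances once per real optimizer step
        global_step += 1
```

So while skipping already-seen batches, progress_bar.update(1) is driven by skipped samples rather than by gradient-descent steps, which is the behaviour I am asking about.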
Reproduction
Not needed.
Logs
No response
System Info
- diffusers version: 0.21.0.dev0
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- PyTorch version (GPU?): 2.0.1 (True)
- Huggingface_hub version: 0.16.4
- Transformers version: 4.32.0.dev0
- Accelerate version: 0.22.0.dev0
- xFormers version: 0.0.20
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?