Does it support LCM models? #34
I'm not sure...
I've tried modifying it, with no success yet:

```
Traceback (most recent call last):
```

Could it have something to do with this? https://huggingface.co/docs/diffusers/main/en/api/schedulers/lcm

> timesteps (List[int], optional) — Custom timesteps used to support arbitrary spacing between timesteps. If None, then the default timestep spacing strategy of equal spacing between timesteps on the training/distillation timestep schedule is used. If timesteps is passed, num_inference_steps must be None.
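To make the quoted docs concrete, here is a minimal pure-Python sketch of the "equal spacing on the training/distillation timestep schedule" strategy they describe. The parameter names (`num_inference_steps`, `original_inference_steps`, `num_train_timesteps`) mirror the diffusers docs, but this is an illustration of the spacing idea, not the library's actual implementation:

```python
# Hedged sketch of LCM's default timestep spacing: the distilled model only
# "knows" original_inference_steps timesteps out of the full training
# schedule, so inference timesteps are sampled evenly from that subset.

def lcm_default_timesteps(num_inference_steps,
                          original_inference_steps=50,
                          num_train_timesteps=1000):
    # Distillation schedule: every k-th timestep of the training schedule.
    k = num_train_timesteps // original_inference_steps
    origin_timesteps = [i * k - 1 for i in range(1, original_inference_steps + 1)]
    # Pick num_inference_steps of them, evenly spaced, high noise first.
    skip = len(origin_timesteps) // num_inference_steps
    return origin_timesteps[::-1][::skip][:num_inference_steps]

print(lcm_default_timesteps(4))  # -> [999, 759, 519, 279]
```

Per the docs, if you pass your own `timesteps` list instead, `num_inference_steps` must be left as `None`.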
Works now using this, but it's not much quicker, and could even be slower at 13.03s/it (on a T4 Colab, batch of 3):

```python
# diffusion model
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
```
For some reason it ran out of memory just before generating the last two key frames, on a batch of only 3.

```
Processing sequences: 100% 4/4 [07:36<00:00, 114.22s/it]
```
Based on your experiment, maybe our method is not directly compatible with LCM.
Yes, and not a very good resulting video either, although that's no doubt mostly due to the prompt and model. The consistency looks pretty good though, especially the background.

lcmresult2.mp4