This repository was archived by the owner on Feb 7, 2025. It is now read-only.

Conversation

@virginiafdez (Contributor)

…erer methods, which is then passed to VQVAE encode_stage_2_inputs if autoencoder_model is a VQVAE.

Set this flag randomly during testing (when the autoencoder is a VAE, it shouldn't matter), ran the tests, and ran the formatter.

  • controlnet.py has been changed for reformatting purposes only.
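The change described above can be sketched as follows. This is a minimal illustrative example, not the actual MONAI Generative implementation: the class names, codebook, and inferer method here are hypothetical stand-ins showing how a `quantized` flag on the inferer can be forwarded to a VQ-VAE's `encode_stage_2_inputs`, while a plain VAE ignores it.

```python
class ToyVQVAE:
    """Stand-in VQ-VAE: encode() yields continuous latents; quantize()
    snaps each latent to the nearest codebook entry."""

    def __init__(self, codebook):
        self.codebook = codebook

    def encode(self, x):
        # Toy "encoder": halve each input value to produce a latent.
        return [v / 2.0 for v in x]

    def quantize(self, z):
        # Nearest-neighbour lookup in a 1-D codebook.
        return [min(self.codebook, key=lambda c: abs(c - v)) for v in z]

    def encode_stage_2_inputs(self, x, quantized=True):
        # When quantized=False, return the raw (non-quantised) latents.
        z = self.encode(x)
        return self.quantize(z) if quantized else z


class ToyLatentInferer:
    """Hypothetical inferer that stores the flag and forwards it only
    when the first-stage model is a VQ-VAE; a plain VAE has no
    quantisation step, so the flag is irrelevant there."""

    def __init__(self, autoencoder, quantized=True):
        self.autoencoder = autoencoder
        self.quantized = quantized

    def get_latents(self, x):
        if isinstance(self.autoencoder, ToyVQVAE):
            return self.autoencoder.encode_stage_2_inputs(
                x, quantized=self.quantized
            )
        return self.autoencoder.encode(x)  # plain-VAE path


vqvae = ToyVQVAE(codebook=[0.0, 1.0, 2.0])
print(ToyLatentInferer(vqvae, quantized=True).get_latents([1.8, 4.2]))
print(ToyLatentInferer(vqvae, quantized=False).get_latents([1.8, 4.2]))
```

With `quantized=True` the latents are snapped to codebook entries; with `quantized=False` the continuous encoder output is returned unchanged, which is what non-quantised VQ-VAE LDM training needs.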

@marksgraham (Collaborator) left a comment


LGTM!

@marksgraham merged commit ef6b7e6 into main on Mar 26, 2024
@marksgraham deleted the 480-non-quantised-vq-vae-ldm-training-needs-modification-of-the-inferer-to-be-able-to-modify-the-quantised-flag branch on March 26, 2024 at 13:51


Development

Successfully merging this pull request may close these issues.

Non-quantised VQ-VAE LDM training needs modification of the inferer to be able to modify the quantised flag.

3 participants