This repository was archived by the owner on Feb 7, 2025. It is now read-only.

Conversation

@virginiafdez
Contributor

…DM training on the VQVAE.

      return reconstruction, quantization_losses

- def encode_stage_2_inputs(self, x: torch.Tensor) -> torch.Tensor:
+ def encode_stage_2_inputs(self, x: torch.Tensor, non_quantized: bool = False) -> torch.Tensor:
Collaborator

Can we flip the argument name to quantized and default it to True? I think that's easier to understand than the double-negative non_quantized defaulting to False.
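A minimal sketch of the suggested flag flip. `ToyVQVAE`, its codebook, and the nearest-value quantizer below are illustrative stand-ins (not the actual repository's VQVAE implementation); the point is only the API shape, where the default `quantized=True` preserves the original behaviour and `quantized=False` exposes the raw encoder outputs for LDM training.

```python
class ToyVQVAE:
    """Hypothetical, simplified VQ-VAE used to illustrate the flag flip."""

    def __init__(self, codebook):
        # A toy scalar codebook, e.g. [0.0, 0.5, 1.0].
        self.codebook = codebook

    def encode(self, x):
        # Stand-in encoder: identity mapping, for illustration only.
        return list(x)

    def quantize(self, z):
        # Snap each latent value to its nearest codebook entry.
        return [min(self.codebook, key=lambda c: abs(c - v)) for v in z]

    def encode_stage_2_inputs(self, x, quantized=True):
        """Return latents for stage-2 (LDM) training.

        With the flipped argument, quantized=True (the default) keeps the
        original quantized behaviour; quantized=False returns the
        non-quantized encoder outputs.
        """
        z = self.encode(x)
        return self.quantize(z) if quantized else z


model = ToyVQVAE([0.0, 0.5, 1.0])
print(model.encode_stage_2_inputs([0.4, 0.9]))                   # [0.5, 1.0]
print(model.encode_stage_2_inputs([0.4, 0.9], quantized=False))  # [0.4, 0.9]
```

The positive name reads naturally at the call site (`quantized=False` plainly means "give me non-quantized latents"), and defaulting it to True keeps existing callers unchanged.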

Collaborator

@marksgraham left a comment

Just one request, otherwise all good!

@virginiafdez
Contributor Author

Done @marksgraham

@marksgraham marksgraham merged commit 0db685f into main Mar 12, 2024
@marksgraham marksgraham deleted the 473-allow-ldm-model-to-train-on-non-quantized-encoded-outputs-of-the-vq-vae branch March 12, 2024 16:28

Development

Successfully merging this pull request may close these issues.

Allow LDM model to train on non-quantized encoded outputs of the VQ-VAE

3 participants