
Plugging vector-quantize-pytorch into taming-transformers #16

tanouch opened this issue Mar 31, 2022 · 2 comments
tanouch commented Mar 31, 2022

Hi,

I noticed that your architecture can be plugged into the pipeline from https://github.com/CompVis/taming-transformers. I have proposed code doing that here: https://github.com/tanouch/taming-transformers. It makes it possible to properly compare the different features proposed in your repo (lower codebook dimension, cosine similarity, orthogonal regularization loss, etc.) with the original formulation.

The code from this repo is used in the two following files:

  • taming-transformers/taming/models/vqgan.py
  • taming-transformers/taming/modules/vqvae/quantize.py

As you can see, it is easy to launch a large-scale training run with your proposed architecture; the glue code boils down to a small adapter, sketched below.
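For reference, here is a minimal sketch of what that adapter looks like. It assumes the documented `(quantized, indices, commit_loss)` return convention of `VectorQuantize`; the class name `VQAdapter` and the `None` placeholders in the info tuple are illustrative, not code from either repo:

```python
import torch.nn as nn
from einops import rearrange
from vector_quantize_pytorch import VectorQuantize

class VQAdapter(nn.Module):
    # hypothetical wrapper: exposes VectorQuantize through the
    # (z_q, loss, info) interface used by taming-transformers' VQModel
    def __init__(self, dim, codebook_size, **vq_kwargs):
        super().__init__()
        self.vq = VectorQuantize(dim = dim, codebook_size = codebook_size, **vq_kwargs)

    def forward(self, z):
        # VQGAN passes feature maps as (b, c, h, w), while
        # VectorQuantize expects (b, n, d)
        b, c, h, w = z.shape
        z_flat = rearrange(z, 'b c h w -> b (h w) c')
        quantized, indices, commit_loss = self.vq(z_flat)
        z_q = rearrange(quantized, 'b (h w) c -> b c h w', h = h, w = w)
        # the trailing tuple mirrors taming's (perplexity, one-hot
        # encodings, indices); only the indices are reproduced here
        return z_q, commit_loss, (None, None, indices)
```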

I am not sure whether this issue belongs here or in the taming-transformers repo, but I thought you might be interested.
Thanks again for your work and these open-sourced repositories!

lucidrains (Owner) commented

@tanouch thank you! i'm also open to redesigning the base class (or adding an adapter wrapper on top) that would allow it to plug into the taming transformers library more easily, if you think that makes sense

tanouch (Author) commented Apr 6, 2022

Hi @lucidrains,
Thanks for your response. The way I understand it, your current base/main class, VectorQuantize, is the equivalent of the VectorQuantizer class from https://github.com/CompVis/taming-transformers/blob/master/taming/modules/vqvae/quantize.py.

That consequently makes these two classes functionally the same (and redundant).

Maybe we could simply add an argument to the constructor of the VQGAN class specifying which quantizer to use, along the lines of the sketch below.
What do you think?
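Something like the following, as a rough sketch: the `quantizer_type` flag and the `build_quantizer` helper are names I made up for illustration, and it reuses the hypothetical `VQAdapter` from the first comment.

```python
from taming.modules.vqvae.quantize import VectorQuantizer

def build_quantizer(quantizer_type, n_embed, embed_dim, **kwargs):
    # "taming" keeps the original formulation; "lucidrains" routes
    # through the hypothetical VQAdapter sketched in the first comment
    if quantizer_type == "taming":
        return VectorQuantizer(n_embed, embed_dim, beta = 0.25)
    if quantizer_type == "lucidrains":
        return VQAdapter(dim = embed_dim, codebook_size = n_embed, **kwargs)
    raise ValueError(f"unknown quantizer_type: {quantizer_type!r}")

# inside VQModel.__init__, in place of the hard-coded VectorQuantizer:
# self.quantize = build_quantizer(quantizer_type, n_embed, embed_dim)
```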
