I noticed your architecture could be plugged into the pipeline from https://github.com/CompVis/taming-transformers. I have proposed code here (https://github.com/tanouch/taming-transformers) that does this. It makes it possible to properly compare the different features proposed in your repo (lower codebook dimension, cosine similarity, orthogonal regularization loss, etc.) against the original formulation.
As you can see, it is easy to launch a large scale training with your proposed architecture.
I am not sure whether this issue belongs here or in the taming-transformers repo, but I thought you might be interested.
Thanks again for your work and these open-sourced repositories!
@tanouch thank you! I'm also open to redesigning the base class (or adding an adapter wrapper on top) so that it plugs into the taming-transformers library more easily, if you think that makes sense.
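The adapter idea mentioned above could be sketched roughly as follows. This is a minimal, hypothetical sketch (not code from either repo): it assumes a sequence-style quantizer that maps `(B, N, D)` to `(quantized, indices, loss)` — the calling convention of `VectorQuantize` from this repo — and wraps it to expose the `(B, C, H, W) -> (z_q, loss, info)` interface that taming-transformers' `VQModel` expects from its `quantize` module. The class name `TamingVQAdapter` is invented for illustration.

```python
import torch
from torch import nn


class TamingVQAdapter(nn.Module):
    """Hypothetical adapter: wraps a sequence-style quantizer
    (signature: (B, N, D) -> (quantized, indices, loss)) so it can be
    dropped in where taming-transformers expects a VectorQuantizer
    taking channel-first feature maps (B, C, H, W)."""

    def __init__(self, quantizer: nn.Module):
        super().__init__()
        self.quantizer = quantizer

    def forward(self, z: torch.Tensor):
        # taming-transformers feeds feature maps channel-first: (B, C, H, W)
        b, c, h, w = z.shape
        # flatten the spatial grid into a token sequence: (B, H*W, C)
        z_flat = z.permute(0, 2, 3, 1).reshape(b, h * w, c)
        quantized, indices, loss = self.quantizer(z_flat)
        # restore the channel-first spatial layout expected downstream
        z_q = quantized.reshape(b, h, w, c).permute(0, 3, 1, 2).contiguous()
        # taming's VectorQuantizer returns (z_q, loss, (perplexity,
        # min_encodings, indices)); perplexity and one-hot encodings are
        # not computed here, so None is returned in their place
        return z_q, loss, (None, None, indices)
```

With something like this, the quantizer could be swapped in via the model config without touching the rest of the autoencoder, since the encoder/decoder only see `(B, C, H, W)` tensors on either side of `quantize`.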