
Decoder is fixed after training? #9

Closed
xiaoiker opened this issue Dec 23, 2021 · 3 comments

Comments

@xiaoiker

Thanks for this awesome work. Could I ask a question about the training part?
I was wondering: after the first step, do you train the decoder again in the second step? It seems the upper bound on performance is determined by how good the decoder is, right? Does this also mean that if we build a stronger autoencoder, the performance can also be improved? Many thanks.

@samb-t
Owner

samb-t commented Jan 5, 2022

Hi, sorry for the delay, I've been taking a break over Christmas/New Year. The decoder is only trained in the first step, as is typical with Vector-Quantized models, so yes, performance is bounded by the decoder's performance. Since, in terms of FID, we are relatively close to the autoencoder's reconstruction quality, it is definitely important to improve the decoder. Some recent papers have focused on this part, using Vision Transformers and DDPMs. Hope this helps.
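For readers unfamiliar with the two-stage setup described above, here is a minimal PyTorch sketch of the idea. It is not this repository's actual code; all module names, shapes, and losses are hypothetical stand-ins. The point it illustrates is that the stage-1 VQ autoencoder (decoder included) is frozen while a stage-2 prior is trained on its discrete codes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the real stage-1 autoencoder and stage-2 prior;
# names, shapes, and losses are illustrative, not this repo's API.
class TinyVQAE(nn.Module):
    def __init__(self, num_codes=16, dim=8):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.enc = nn.Linear(32, dim)   # toy "encoder"
        self.dec = nn.Linear(dim, 32)   # toy "decoder"

    def encode(self, x):
        z = self.enc(x)                              # (B, dim)
        d = torch.cdist(z, self.codebook.weight)     # distance to each code
        return d.argmin(dim=-1)                      # (B,) discrete indices

class TinyPrior(nn.Module):
    def __init__(self, num_codes=16):
        super().__init__()
        self.num_codes = num_codes
        self.net = nn.Linear(num_codes, num_codes)

    def loss(self, codes):
        onehot = F.one_hot(codes, self.num_codes).float()
        return F.cross_entropy(self.net(onehot), codes)

vqae, prior = TinyVQAE(), TinyPrior()

# Stage 2: freeze the whole autoencoder -- encoder, codebook, and decoder.
for p in vqae.parameters():
    p.requires_grad = False
vqae.eval()

opt = torch.optim.Adam(prior.parameters(), lr=1e-3)
for _ in range(3):                       # stand-in training loop
    x = torch.randn(4, 32)               # fake batch of "images"
    with torch.no_grad():
        codes = vqae.encode(x)           # no gradients reach the AE
    loss = prior.loss(codes)             # only the prior is updated
    opt.zero_grad(); loss.backward(); opt.step()
```

The key point is that no gradients flow into the autoencoder in stage 2, so the decoder, and hence the reconstruction ceiling the comment above refers to, is fixed once stage 1 ends.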

@xiaoiker
Author

xiaoiker commented Jan 6, 2022

@samb-t Thanks a lot, this really helps.

xiaoiker closed this as completed Jan 6, 2023
@xiaoiker
Author

xiaoiker commented Jan 4, 2023

Congratulations! I just realized that your paper was accepted by ECCV. I noticed your paper early, before CVPR 2022, and found that a later paper, latent DPM, shares the same idea as yours but got much more attention. Sad, but research does need some luck sometimes. Anyway, glad it was accepted!
