Generated Samples Do Not Resemble Digits for Unsupervised MNIST Training #9
Comments
I did some more literature review on this, and it seems that normalizing flows learn local pixel correlations rather than semantic features. For example, see this link. Also, from a different paper by the same author:
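To make the "local correlations" point concrete, here is a minimal sketch (not the repo's code) of a single RealNVP-style affine coupling layer in NumPy. Each layer transforms half of the dimensions conditioned on the other half through a small conditioner (a toy linear map here), so any one step only couples dimensions through that conditioner; this is one intuition for why flow likelihoods tend to track local pixel statistics rather than digit semantics.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                               # toy "image" dimension
W_s = 0.1 * rng.standard_normal((D // 2, D // 2))   # conditioner weights (log-scale)
W_t = 0.1 * rng.standard_normal((D // 2, D // 2))   # conditioner weights (shift)

def coupling_forward(x):
    """x -> y, plus log|det J| for the change-of-variables formula."""
    x1, x2 = x[: D // 2], x[D // 2 :]
    s = np.tanh(W_s @ x1)            # log-scale, bounded for stability
    t = W_t @ x1                     # shift
    y2 = x2 * np.exp(s) + t          # elementwise affine transform of x2
    return np.concatenate([x1, y2]), s.sum()

def coupling_inverse(y):
    """Exact inverse: y1 passes through unchanged, so s and t are recomputable."""
    y1, y2 = y[: D // 2], y[D // 2 :]
    s = np.tanh(W_s @ y1)
    t = W_t @ y1
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2])

x = rng.standard_normal(D)
y, logdet = coupling_forward(x)
x_rec = coupling_inverse(y)
print(np.allclose(x, x_rec))         # exact invertibility: True
```

Real models stack many such layers with permutations in between, but the per-layer transform stays this simple.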
Hey @siavashk! Generally, RealNVP and Glow are not outstanding at modelling MNIST visually. I would expect the results to improve if you train longer, though.
Hey @siavashk, to be honest, not really. Flow samples look best on datasets like CelebA and are surprisingly bad on MNIST. For example, here are samples from the iResNet paper, better than what you got above but still not amazing: Our code gives significantly better samples if you use labels, though; here are the supervised and semi-supervised versions compared to the unsupervised samples:
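A rough sketch of why labels help FlowGMM's samples: its latent prior is a Gaussian mixture with one component per class, so with labels you can sample from a single class component and invert the flow from that one digit's latent region instead of the broad unconditional prior. This is illustrative only; `flow_inverse` below is a stand-in for the trained flow's inverse pass, not the repo's API.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 16, 10                               # toy latent dim, number of classes
means = 5.0 * rng.standard_normal((K, D))   # per-class Gaussian component means

def sample_latent(label=None):
    """Draw z from the mixture; a label selects one class component."""
    if label is None:                        # unsupervised: random component
        label = rng.integers(K)
    return means[label] + rng.standard_normal(D)   # unit-variance component

def flow_inverse(z):
    # Placeholder for the trained flow's inverse pass (z -> image).
    return z

z = sample_latent(label=3)                  # class-conditional sample
x = flow_inverse(z)
print(x.shape)                              # (16,)
```

With good labels the components separate cleanly in latent space, which is consistent with the supervised samples above looking sharper than the unsupervised ones.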
Thank you for sharing your code. This is impressive work.
I am trying to train FlowGMM in an unsupervised manner (no labels) on the MNIST dataset, using the script at `experiments/train_flows/train_unsup.py` with default parameters. I am currently at epoch 10 / 100 and the loss is slowly decreasing (see attached image), but the generated samples do not look like hand-written digits (see attached). Is this issue resolved when the training is closer to being finished, or do I need to run the training script with different arguments?
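A side note on reading the loss curve: flow losses are usually negative log-likelihoods in nats, which convert to bits per dimension as sketched below. MNIST flows typically end up around 1 bit/dim, so a value far above that at epoch 10 would be consistent with the samples improving as training continues. This is a generic conversion, not the repo's exact logging; the `600.0` input is a made-up example value.

```python
import numpy as np

def bits_per_dim(nll_nats, num_dims=28 * 28):
    """Convert a per-image negative log-likelihood (in nats) to bits per dimension."""
    return nll_nats / (num_dims * np.log(2))

# Hypothetical per-image NLL of 600 nats on 28x28 MNIST:
print(round(bits_per_dim(600.0), 3))   # 1.104
```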