What about the performance on the MNIST dataset #1

What about the performance on the MNIST dataset? And what about the GPU consumption?
I'm updating the readme now. Here is an output after 100 epochs:

If you use the same noise for every sampling iteration, the images do not switch between different numbers as much. This is the intended 'self-consistency' property from the paper: generations on the same trajectory produce the same sample. However, the multi-step sampler algorithm in the paper does add different noise at each iteration, so it's really up to you which you prefer.

The GPU consumption is pretty low; I have only 2GB of VRAM. It could probably be a lot better with a smaller network - I think the UNet I've used is too big.
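For reference, here is a minimal sketch of what that multi-step sampling loop can look like, assuming a hypothetical `consistency_model(x, t)` that maps a noisy input at noise level `t` directly back to a clean sample (the function and parameter names are placeholders, not this repo's API). The `fixed_noise` flag switches between fresh noise per step, as in the paper's algorithm, and the reused-noise variant described above:

```python
import torch

def multistep_sample(consistency_model, timesteps, shape, sigma_min=0.002,
                     fixed_noise=False, device="cpu"):
    # `timesteps` is a decreasing sequence of noise levels, e.g. [80.0, 20.0, 5.0].
    z = torch.randn(shape, device=device)
    # One-step generation: denoise pure noise at the highest noise level.
    x = consistency_model(timesteps[0] * z, timesteps[0])
    for t in timesteps[1:]:
        # Re-noise the current estimate up to level t, then denoise in one jump.
        # fixed_noise=True reuses the initial z, keeping generations on the same
        # trajectory; the paper's sampler draws fresh noise at each step.
        noise = z if fixed_noise else torch.randn(shape, device=device)
        x = consistency_model(x + (t**2 - sigma_min**2) ** 0.5 * noise, t)
    return x
```

With `fixed_noise=True`, samples drawn along the loop stay visually close to one sample rather than hopping between digits.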
Thanks for your quick answers. The performance is pretty good. I see that you use an L2 loss; I guess this also reduces the GPU consumption.
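For context, the L2 variant of the consistency training loss is just a mean-squared distance between the model's outputs at two adjacent noise levels. A hedged sketch, with `model` and `ema_model` as hypothetical stand-ins for the online and EMA target networks (not necessarily how this repo names them):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, ema_model, x0, t_curr, t_next):
    # (t_curr, t_next) are adjacent noise levels with t_curr < t_next.
    z = torch.randn_like(x0)
    # Online model denoises x0 corrupted to the higher level t_next.
    pred = model(x0 + t_next * z, t_next)
    # EMA target denoises the same x0 and noise at the adjacent lower level.
    with torch.no_grad():
        target = ema_model(x0 + t_curr * z, t_curr)
    # The L2 distance ties the two predictions together ('self-consistency').
    return F.mse_loss(pred, target)
```

The paper also reports results with LPIPS in place of L2; L2 is the cheaper of the two since it needs no extra feature-extractor network.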
I really appreciate your implementation on MNIST!
Please give the project a watch or a star if you can. I'm currently looking for work, and a bit of GitHub exposure might help.
Sure, I'm glad to help.