Demo on remote server #12
Is it possible to support running the interactive demo on a remote server (similar to TensorBoard)? I have GPUs available only on a headless server, which might be the case for many others.
Thanks.

Comments
I'm currently looking into the possibility of making a remote version of bimpy, but that requires some work. On the other hand, there are a number of scripts (in the make_figures folder) for making various figures that can be run on a remote server.
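For anyone in the same situation: such figure scripts can generally be run headlessly by selecting a non-interactive matplotlib backend and saving output to disk instead of opening a window. A minimal sketch of that pattern (the plotted data is only a placeholder, and the repository's scripts may not use matplotlib at all):

```python
# Headless figure generation: force the non-interactive Agg backend so no
# display is required, then write the result to disk and copy it back later
# (e.g., with scp or rsync). The plotted data below is only a placeholder.
import matplotlib
matplotlib.use("Agg")              # must happen before importing pyplot
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_title("placeholder figure")
fig.savefig("figure.png", dpi=300)
```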
No worries. I managed to distill interactive_demo.py into a Jupyter notebook without the dependency on bimpy. It's not interactive, but it easily serves the same purpose with some loops and other hacks. Just one question on reconstructions: should we expect the reconstruction to be highly similar to the original image if the image comes from the FFHQ training set? I know that for real-world images, which we use in the demo, this is not the case. But even for FFHQ images, I found the reconstruction to be quite different from the original image (though of course high quality). In particular, for my project, I am trying to find a set of images for which the reconstruction is very close (in identity) to the original image.
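For reference, the non-interactive version reduces to a loop along these lines. `encode` and `decode` below are placeholder wrappers around the trained encoder and generator; they are not the actual function names in this repository or in the linked notebook.

```python
# Non-interactive reconstruction loop (sketch). `encode` maps an image to a
# 512-element latent code and `decode` maps the code back to a 1024x1024 image;
# both names are placeholders for the repository's real model calls.
import torch
import torchvision.transforms.functional as TF
from PIL import Image

@torch.no_grad()
def reconstruct(paths, encode, decode, device="cuda"):
    pairs = []
    for path in paths:
        img = Image.open(path).convert("RGB").resize((1024, 1024))
        x = TF.to_tensor(img).unsqueeze(0).to(device) * 2 - 1   # scale to [-1, 1]
        w = encode(x)            # 512-element latent vector
        x_rec = decode(w)        # reconstructed image
        pairs.append((x.cpu(), x_rec.cpu()))
    return pairs
```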
Could you please give a link to the Jupyter notebook? Or you can post a link here: #13. It could be useful for others. Reconstructions are expected to be similar. Keep in mind, though, that a 1024x1024 image is compressed down to a 512-element vector. Enforcing higher priority for important facial features is definitely possible, but it is out of the scope of this work.
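To put that compression in perspective, a rough count of raw values (assuming three color channels and ignoring bit depth):

```python
# A 1024x1024 RGB image holds ~3.1 million raw values; the latent code holds 512.
pixels = 1024 * 1024 * 3        # 3,145,728 values
latent = 512
print(f"~{pixels // latent}x fewer values in the latent code")   # ~6144x
```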
The notebook without the bimpy dependency is available at https://github.com/VSehwag/ALAE/blob/master/replicate_results.ipynb. So far, I am quite intrigued by the visualizations obtained across a diverse set of images. However, as I mentioned earlier in issue #16, it is still a bit unclear how the principal direction vectors for attributes are obtained. In particular, given the unsupervised nature of the training data, how are we able to find directions for attributes like smile, sunglasses, etc.? Would it be possible to have a short discussion about it offline?
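For context, the thread does not spell out how those attribute directions were computed. One common recipe (in the spirit of InterFaceGAN, and not necessarily what was done here) is to label a set of latent codes with an off-the-shelf attribute classifier, fit a linear classifier in latent space, and take the unit normal of its decision boundary as the direction:

```python
# Sketch: derive an attribute direction from labelled latent codes.
# W is an (N, 512) array of latent vectors; y is an (N,) array of binary labels
# (e.g., smiling / not smiling) produced by a separate attribute classifier.
# This is one common approach, not necessarily the one used by the authors.
import numpy as np
from sklearn.linear_model import LogisticRegression

def attribute_direction(W, y):
    clf = LogisticRegression(max_iter=1000).fit(W, y)
    d = clf.coef_.ravel()
    return d / np.linalg.norm(d)          # unit vector in latent space

def traverse(w, direction, alpha):
    # Moving a latent code along the direction and decoding it should add
    # (alpha > 0) or remove (alpha < 0) the attribute.
    return w + alpha * direction
```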
@VSehwag Thank you for posting this. Super useful.