
Demo on remote server #12

Open
VSehwag opened this issue Apr 26, 2020 · 5 comments

Comments

VSehwag commented Apr 26, 2020

Is it possible to support running the interactive demo on a remote server (similar to TensorBoard)? I only have GPUs on a headless server, which is probably the case for many others.

Thanks.

@podgorskiy (Owner)

I'm currently looking into the possibility of making a remote version of bimpy, but that requires some work.
The interactive demo is meant to run on a local machine with a desktop environment.
You can run it on any machine with a decent gaming GPU; according to #26, it even seems to run on a GTX 970.

On the other hand, there is a set of scripts (in the make_figures folder) for producing various figures, and those can be run on a remote server.

VSehwag commented May 2, 2020

No worries. I managed to distill interactive_demo.py into a Jupyter notebook without the dependency on bimpy. It's not interactive, but it easily serves the same purpose with some loops and other hacks.

Just one question on reconstructions: should we expect the reconstruction to be highly similar to the original image if the image comes from the FFHQ training set? I know that for real-world images, which we use in the demo, this isn't the case. But even for FFHQ images, I found the reconstruction to be quite different from the original image (though of course high quality). In particular, for my project I am trying to find a set of images where the reconstruction is very close (in identity) to the original image.
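
For anyone else looking to do the same, here is a minimal, hypothetical sketch of the kind of bimpy-free loop such a notebook might use in place of the interactive demo; `model.encode` and `model.decode` are placeholder names rather than the repository's actual API:

```python
# Hypothetical sketch of a bimpy-free reconstruction loop; `model.encode` and
# `model.decode` are placeholder names, not necessarily the real ALAE API.
import glob
import os

import torch
from PIL import Image
import torchvision.transforms.functional as TF

def reconstruct_folder(model, image_dir, out_dir, device="cuda"):
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(image_dir, "*.png"))):
        img = TF.to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            w = model.encode(img)    # placeholder: 1024x1024 image -> 512-d latent
            rec = model.decode(w)    # placeholder: 512-d latent -> reconstruction
        rec = rec.squeeze(0).clamp(0, 1).cpu()
        TF.to_pil_image(rec).save(os.path.join(out_dir, os.path.basename(path)))
```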

@podgorskiy (Owner)

Could you please give a link to the Jupyter notebook? Or you can post a link here: #13. It could be useful for others.

Reconstructions are expected to be similar, though keep in mind that a 1024x1024 image is compressed down to a 512-element vector.
The model tries to make a reconstruction that is as semantically close as possible, but it knows nothing about which features of a human face are important for preserving identity. Even a very slight change in a face can result in an unrecognizable person. So, yes, people can indeed look like different persons, even though the overall picture is very similar.

Enforcing higher priority for important facial features is definitely possible, but it is out of the scope of this work.
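
As a back-of-envelope illustration of how aggressive that compression is (assuming a 1024x1024 RGB image, so this is only a rough count):

```python
# Rough count of raw values in a 1024x1024 RGB image vs. the 512-element latent.
pixels = 1024 * 1024 * 3   # 3,145,728 values
latent = 512
print(pixels // latent)    # 6144 -> roughly a 6000:1 reduction
```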

VSehwag commented May 3, 2020

The notebook without bimpy dependency is available at https://github.com/VSehwag/ALAE/blob/master/replicate_results.ipynb.

So far, I am quite intrigued by the visualizations obtained across a diverse set of images. However, as I mentioned earlier in issue #16, it's still a bit unclear how the principal direction vectors for attributes are obtained. In particular, given the unsupervised nature of the training data, how are we able to find directions for attributes like smile, sunglasses, etc.? Would it be possible to have a short discussion about it offline?
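
For context, a common recipe in related StyleGAN-editing work (not necessarily what ALAE itself uses) is to label a set of sampled latent codes with an off-the-shelf attribute classifier and take either the normal of a linear separator or simply the difference of class means as the attribute direction. A minimal sketch under that assumption:

```python
# Hypothetical sketch of one common approach (not necessarily ALAE's method):
# latents labeled by an external attribute classifier, direction = difference
# of class means (a linear SVM normal is a popular alternative).
import numpy as np

def attribute_direction(latents, labels):
    """latents: (N, 512) array of latent codes; labels: (N,) binary attribute labels."""
    pos = latents[labels == 1].mean(axis=0)
    neg = latents[labels == 0].mean(axis=0)
    direction = pos - neg
    return direction / np.linalg.norm(direction)

# Editing a code w then amounts to w + alpha * attribute_direction(latents, labels)
# for a chosen strength alpha.
```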

@daxiongshu

@VSehwag Thank you for posting this. Super useful.
