
How to find the attribute direction? #5

Closed
fuxuliu opened this issue Feb 28, 2019 · 6 comments

fuxuliu commented Feb 28, 2019

Hey Puzer, nice work! I am working on generating more meaningful face images and controlling the attributes myself. I see that you found attribute directions such as smiling, age, and gender. How can I find more attribute directions, such as hair, skin color, or other facial expressions? Do you have a script or some other way to do this? Thank you.

Puzer (Owner) commented Feb 28, 2019

Hey @Gary-Deeplearning, and thanks!
You can find more examples here: Learn_direction_in_latent_space.ipynb.
I think the notebook is self-explanatory; using a similar approach, you can find your own directions.
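In short, the idea is to fit a simple linear classifier on latent vectors labeled with the attribute of interest and use its weight vector as the direction. A minimal sketch of that approach, assuming you already have dlatents of shape (N, 18, 512) and binary labels for your attribute (the file names below are placeholders, not files from the repo):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder inputs: dlatents of shape (N, 18, 512) produced by the encoder,
# and 0/1 labels for the attribute of interest (e.g. smiling vs. not smiling).
dlatents = np.load('dlatents.npy')
labels = np.load('attribute_labels.npy')

# Flatten each latent to a single feature vector and fit a linear classifier.
X = dlatents.reshape((len(dlatents), -1))
clf = LogisticRegression(class_weight='balanced')
clf.fit(X, labels)

# The normalized weight vector of the linear model serves as the direction.
direction = clf.coef_.reshape((18, 512)).astype(np.float32)
direction /= np.linalg.norm(direction)
np.save('my_direction.npy', direction)
```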

fuxuliu (Author) commented Mar 1, 2019

@Puzer Yep, thank you, I will check the notebook. Also, the README says that new scripts for finding your own directions will be released soon. Will these be released?

fuxuliu (Author) commented Mar 1, 2019

@Puzer Also, why did you use only the first 8 layers when moving along the latent direction?
new_latent_vector[:8] = (latent_vector + coeff*direction)[:8]
Is it based on this result, i.e. which latent layer is most useful for predicting gender?
[figure: accuracy vs. latent layer for predicting gender]
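For context, that line comes from a helper in the notebook that looks roughly like this (a sketch assuming (18, 512) dlatent tensors; the function name here is hypothetical):

```python
import numpy as np

def apply_direction(latent_vector, direction, coeff, n_layers=8):
    # latent_vector and direction are (18, 512) dlatent tensors.
    new_latent_vector = latent_vector.copy()
    # Shift only the first n_layers (the coarse/middle styles); the remaining
    # layers, which mostly control fine details, keep their original values.
    new_latent_vector[:n_layers] = (latent_vector + coeff * direction)[:n_layers]
    return new_latent_vector
```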

pender commented Mar 2, 2019

> @Puzer Also, why did you use only the first 8 layers when moving along the latent direction?
> new_latent_vector[:8] = (latent_vector + coeff*direction)[:8]
> Is it based on this result, i.e. which latent layer is most useful for predicting gender?
> [figure: accuracy vs. latent layer for predicting gender]

I'm confused by this too. It looks like @Puzer is training the linear regression on dlatents that he obtained from the mapping network, but the mapping network just broadcasts a single 512-length dlatent vector up to an [18, 512] tensor, i.e. all 18 layers of the dlatent tensor should be identical. So you'd get the same result by training on only a single layer of the dlatent tensor, assuming you generated your training data by feeding qlatent vectors through the mapping network (as opposed to using @Puzer's script to derive dlatent tensors from real images). I assume the "accuracy vs. layer" graph above is just showing noise from the linear regression.
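One way to check this, as a sketch assuming the repo's usual setup (the official StyleGAN FFHQ pickle and the bundled dnnlib):

```python
import pickle
import numpy as np
import dnnlib.tflib as tflib

tflib.init_tf()
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

# Map a random qlatent through the mapping network.
qlatents = np.random.randn(1, 512)
dlatents = Gs.components.mapping.run(qlatents, None)  # shape (1, 18, 512)

# All 18 layer rows are identical for mapped qlatents...
print(np.allclose(dlatents[:, :1], dlatents))  # True
# ...whereas dlatents recovered from real images by the encoder can differ
# per layer, so per-layer training only matters in that case.
```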

simplebeauty commented

@Puzer Hi, can you release the full script for finding the smiling, age, and gender directions? I am a beginner in ML; I checked Learn_direction_in_latent_space.ipynb and still feel confused. Thanks.

progmars commented Sep 11, 2019

I just found a project that allows controlling a bunch of StyleGAN features through UI knobs:
https://github.com/SummitKwan/transparent_latent_gan

Being a total newbie at machine learning, I'm wondering: what are the main differences between Puzer's approach and transparent_latent_gan?

Another issue: transparent_latent_gan uses the smaller CelebA dataset, which might be why its features sometimes get too entangled and StyleGAN gets stuck when you try to lock and combine too many features (try adjusting the sliders to create an old, bald, non-smiling, bearded man with eyeglasses).

I'm wondering if Puzer's approach could work better. I tried the current age direction and noticed that at some point it starts adding glasses and a beard. I guess those two features got entangled with age, and I'm not sure what could be done to disentangle them; I would expect the age direction to add only wrinkles and a receding hairline.

Also, when encoding images, I found that alignment sometimes works incorrectly, cropping away the top of the head. For some of my images, the best encoder settings seem to be a learning rate of 4.0 and an image size of 512. With the default settings (learning rate 1.0 and image size 256), some tricky images (old black-and-white photos) or complex cases (a large mustache over the lips) came out totally corrupted, and some less complex images lost enough fine detail that the result felt too "uncanny" to count as an exact match. This is especially true for younger people without deep wrinkles or beards, and for brightly lit photos, where those fine details and shadows matter a lot.

Of course, 4.0 @ 512 can take a pretty long time to train, and sometimes 1000 iterations are not enough. With one particularly tricky image I went as far as 4000 iterations to get satisfactory results, while for some other images such a high learning rate and iteration count led to washed-out results (overfitting?).
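For reference, the settings I describe above correspond to something like this (assuming the flags exposed by encode_images.py match the repo's README):

```
python encode_images.py aligned_images/ generated_images/ latent_representations/ \
    --lr=4.0 --image_size=512 --iterations=4000
```

versus the defaults of --lr=1.0, --image_size=256, and --iterations=1000.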
