
How to get other interfacegan_direction files? #16

Closed
TomatoBoy90 opened this issue Apr 12, 2021 · 8 comments

@TomatoBoy90

Your work is great! You provided three interfacegan_direction files: age.pt, pose.pt, and smile.pt.
However, if I want results for other directions, how do I get other interfacegan_direction files?
Also, I compared InterFaceGAN's files with yours, and the results seem to be different. Can't we use their files directly?

@omertov
Owner

omertov commented Apr 12, 2021

Hi @TomatoBoy90!
To obtain new editing directions, you should follow the official InterFaceGAN repository's guidelines. In this issue we described how we obtained our 3 editing directions, so feel free to check it out (note that you need to use an attribute classification network to label each style vector).

The boundary files from the InterFaceGAN repository were obtained for the pretrained StyleGAN1's latent space, while we trained the boundaries for the pretrained StyleGAN2 (hence the difference you are experiencing).
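Once you have a direction trained for the right latent space, applying it is just a move along the boundary's normal. A minimal sketch (the function and tensor names below are illustrative, not the repository's exact API):

```python
import torch

def apply_direction(w, direction, alpha):
    """Move a latent code w along a normalized editing direction.

    alpha controls the edit strength (and sign) along the direction.
    """
    direction = direction / direction.norm()
    return w + alpha * direction

w = torch.randn(1, 512)        # stand-in for a real StyleGAN2 w vector
age_dir = torch.randn(1, 512)  # stand-in for torch.load("age.pt")
w_edited = apply_direction(w, age_dir, alpha=3.0)
```

Because the direction is normalized first, `alpha` directly equals the distance moved in latent space.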

@TomatoBoy90
Author

Thank you for your quick reply. I read the InterFaceGAN repository, but it seems to require training an SVM model. As supervised learning, it needs labels; how do I get the labels?

@TomatoBoy90
Author

Training an SVM model requires labels, but how does the machine know which face attribute we want to train for?

@omertov
Owner

omertov commented Apr 14, 2021

We used a pretrained network to label each StyleGAN image.
For example, after sampling a style vector w, we generated the corresponding image I=Generator(w) and used a pretrained age classification network to obtain the label age_I = AGE_NET(I) to train the SVM for the age direction.
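The labeling step can be sketched like this; `generator` and `age_net` below are toy stand-ins for the real pretrained StyleGAN generator and age classifier:

```python
import torch

def label_samples(generator, age_net, num_samples, latent_dim=512):
    """Sample style vectors, generate images, and score each with a classifier."""
    ws = torch.randn(num_samples, latent_dim)
    labels = []
    for w in ws:
        image = generator(w.unsqueeze(0))  # I = Generator(w)
        labels.append(age_net(image))      # age_I = AGE_NET(I)
    return ws, torch.stack(labels)

generator = lambda w: w.view(1, -1)  # toy "generator"
age_net = lambda image: image.mean() # toy "age classifier"
ws, ages = label_samples(generator, age_net, num_samples=4)
```

The resulting (w, age) pairs are what the SVM is trained on: the style vectors are the inputs and the classifier scores provide the labels.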

@TomatoBoy90
Author

> We used a pretrained network to label each StyleGAN image.
> For example, after sampling a style vector w, we generated the corresponding image I=Generator(w) and used a pretrained age classification network to obtain the label age_I = AGE_NET(I) to train the SVM for the age direction.
Thanks! How many samples of the style vector w should I take? How many did you use?

@yuval-alaluf

Hi @TomatoBoy90 ,
We used the default procedure used in InterFaceGAN. If I remember correctly, this means we used 500,000 randomly sampled w vectors. We then took the 10,000 samples that got the highest attribute score to be the positive samples and 10,000 samples with the lowest scores to be our negative samples.
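This selection-and-fit procedure can be sketched with scikit-learn, using toy sizes in place of the 500,000 / 10,000 figures and random scores in place of a real attribute classifier:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
ws = rng.standard_normal((1000, 512))  # sampled style vectors (toy count)
scores = rng.standard_normal(1000)     # attribute scores from a classifier (toy)

k = 100  # stands in for the 10,000 extreme samples per side
order = np.argsort(scores)
neg, pos = ws[order[:k]], ws[order[-k:]]  # lowest- and highest-scoring samples

X = np.concatenate([pos, neg])
y = np.concatenate([np.ones(k), np.zeros(k)])
svm = LinearSVC(max_iter=10000).fit(X, y)

# The unit normal of the separating hyperplane is the editing direction.
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
```

Using only the extreme-scoring samples gives the SVM cleanly separated positives and negatives, so its hyperplane normal tracks the attribute well.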

@TomatoBoy90
Author

Thank you for such a detailed and generous reply. Wish you a happy life!

@omertov
Owner

omertov commented Apr 27, 2021

Good luck with your experiments!
I am closing the issue :)
