How to train w+ space boundary? #38
You have two options: (1) flatten the codes from shape (n, 18, 512) to shape (n, 18*512), train the boundary on the flattened codes, and then reshape them back; this way you get a single boundary. (2) Train 18 boundaries for the different layers separately. For this option, please refer to HiGAN for more details.
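A minimal NumPy sketch of the two options above (the shapes are from this thread; the variable names and the toy sample count are illustrative only):

```python
import numpy as np

# Hypothetical W+ latent codes: n samples, 18 layers, 512 dims per layer.
n = 4
codes = np.random.randn(n, 18, 512)

# Option 1: flatten each code to one 18*512 = 9216-d vector, train a single
# boundary on the flattened codes, then reshape back after editing.
flat = codes.reshape(n, 18 * 512)      # (n, 9216) -> input for boundary training
restored = flat.reshape(n, 18, 512)    # recover the original W+ layout
assert np.array_equal(codes, restored)

# Option 2: train one boundary per layer on that layer's 512-d slices.
per_layer = [codes[:, i, :] for i in range(18)]  # 18 arrays, each (n, 512)
```

Reshaping is lossless here, so the single flattened boundary can always be mapped back into the (18, 512) layout after training.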
Thanks for the reply. I actually tried the second method and trained 8 boundaries for the first 8 layers, but the performance was not very good. I'll look into your HiGAN method, thanks very much!
Hi, what sample size should I use for training in W+ space?
In the paper you use 20K samples for StyleGAN. If I use the second method, can I still use 20K for training?
@WJ-Lai (1) 20K is enough. (2) You can use the same 20K samples for all 18 layers.
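Reusing the same samples for every layer can be sketched as a simple loop. The `train_boundary` below is a stand-in (a normalized difference of class means) just to make the loop runnable; the paper actually fits a linear SVM, and the sample count is reduced from 20K for the example:

```python
import numpy as np

np.random.seed(0)

def train_boundary(x, y):
    """Stand-in boundary trainer: unit-norm direction between the two
    class means. (InterFaceGAN uses a linear SVM instead.)"""
    direction = x[y == 1].mean(axis=0) - x[y == 0].mean(axis=0)
    return direction / np.linalg.norm(direction)

n = 200                                      # 20K in practice
codes = np.random.randn(n, 18, 512)          # the SAME samples for every layer
scores = (np.random.rand(n) > 0.5).astype(int)  # binary attribute labels

# One boundary per layer, all trained on the same n samples.
boundaries = np.stack([train_boundary(codes[:, i, :], scores)
                       for i in range(18)])  # (18, 512)
```

The key point from the answer above is that no per-layer resampling is needed: each layer's boundary is fit on that layer's 512-d slice of the same latent codes.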
Using the stylegan-encoder project, I obtained the latent codes as an array of shape (n, 18, 512). However, the training code expects 1-D vector input. Do I need to split the latent code into 1-D vectors?
Thanks a lot!