
could you elaborate more on the training configurations you used? #26

Open
alcanunsal opened this issue Mar 3, 2022 · 3 comments

@alcanunsal

Hi! First of all, thanks for sharing this amazing project; it looks really promising!

I have some questions regarding the training configurations you used for your checkpoints. I'm new to deep learning and GANs, so if you could answer them as simply as possible I would be really grateful.

I was wondering how many epochs you trained for, how long it took to reach the released checkpoints, and whether you think the reconstruction result can be improved (sometimes the face in the target video and the source picture are not exactly aligned, both the source and target eyebrows are visible in the end result, or when the source person has a beard it is only partially transferred to the target video). In addition, could you share the weights you used for the adversarial, attribute, identity, reconstruction, and eye losses, and how I could adjust these values to put more emphasis on the source person and their eyes?

Sorry for bombarding you with questions; thanks in advance!

@AlexanderGroshev
Collaborator

Hi, @alcanunsal, sorry for the late response!

We trained the model in two steps. For the first step we did not use the eye loss and set the other weights as follows: --weight_adv 1 --weight_attr 10 --weight_id 15 --weight_rec 10. For the second step we changed the identity loss weight to --weight_id 70 and enabled the eye loss with --weight_eyes 1200. To make the source person's attributes stronger, you should increase the weights for the identity and eye losses.
We trained the model for 8 epochs in the first step and 2 epochs in the second.
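For reference, here is a minimal sketch of how loss weights like these are typically combined into a total generator loss. The function and argument names are assumptions for illustration and are not taken from this repository's train.py; the defaults match the stage-1 weights quoted above.

```python
# Illustrative only: a weighted sum of GAN loss terms, with the stage-1
# weights from the comment above as defaults. All names are hypothetical.
def total_generator_loss(adv_loss, attr_loss, id_loss, rec_loss, eye_loss,
                         weight_adv=1.0, weight_attr=10.0, weight_id=15.0,
                         weight_rec=10.0, weight_eyes=0.0):
    return (weight_adv * adv_loss
            + weight_attr * attr_loss
            + weight_id * id_loss
            + weight_rec * rec_loss
            + weight_eyes * eye_loss)
```

Under this reading, the second training stage corresponds to evaluating the same sum with weight_id=70.0 and weight_eyes=1200.0, which is why raising those two weights pushes the result toward the source identity and eyes.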

@LyazS

LyazS commented Mar 18, 2022

@AlexanderGroshev could you please tell me the training time? I set batch size = 32 and run on a V100, and one epoch takes 16 hours. 8 + 2 = 10 epochs means roughly 160 hours, i.e. close to 7 days, to finish.
I'd really appreciate it if you could upload the training logs.

@alcanunsal
Author

Thanks a lot for the responses, @AlexanderGroshev, they helped a lot!
I have another question about the dataset. Among the parameters in train.py there are --vgg, --same_person and --same_identity. I would like to train on a combination of the VGGFace2, CelebAHQ and FFHQ datasets. Should I set the --vgg parameter to True? If so, how did you obtain the values for --same_person and --same_identity? Are they statistics of the dataset you used, or probabilities you set yourself so that the swap is sometimes performed with the same source and target image during training? Thanks in advance!
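In many face-swap training pipelines, a parameter like this is the probability of sampling the source and target from the same image (or the same identity) rather than a dataset statistic. The sketch below shows that pattern in a minimal PyTorch-style dataset; the class name, arguments, and file handling are assumptions for illustration, not the loader actually used in this repository.

```python
import random

from PIL import Image
from torch.utils.data import Dataset


class FaceSwapPairs(Dataset):
    """Illustrative sketch: with probability `same_person` the source and
    target come from the same image, otherwise from two random images."""

    def __init__(self, image_paths, same_person=0.2, transform=None):
        self.image_paths = image_paths
        self.same_person = same_person  # probability of a same-person pair (assumed meaning)
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        target = Image.open(self.image_paths[idx]).convert("RGB")
        if random.random() < self.same_person:
            source = target.copy()  # same-person pair: reuse the target image
        else:
            other = random.randrange(len(self.image_paths))
            source = Image.open(self.image_paths[other]).convert("RGB")
        if self.transform is not None:
            source, target = self.transform(source), self.transform(target)
        return source, target
```

If the flags follow this common pattern, --same_person and --same_identity would be sampling probabilities you choose yourself rather than measured statistics, but the maintainers would have to confirm how train.py actually uses them.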
