
The training code is not working, face does not swap, it only tries to restore the same as in the video #15

Closed
netrunner-exe opened this issue Nov 13, 2021 · 7 comments

Comments

@netrunner-exe

The training is not working correctly. The face does not swap; the model only tries to reconstruct the same face as in the video. It looks like this training implementation does not work, or this repo is something of a bad joke for people who want to train their own SimSwap model...
I trained with different crop sizes (512, 224, and 104), but the result is the same: the face is not swapped... Maybe I did something wrong and did not train the model correctly? I prepared the dataset from CelebA with make_dataset.py and trained with the command provided in the README.

In general, has anyone actually managed to train the model and get a real result, rather than one like mine?

Or can someone tell me why this happened? I don't think the author of this repo reads the issues or will answer my message.

Please write about your results, and if this training code really does not work, say so, so that other users do not waste their time.


[image: frame_0000000]

@netrunner-exe
Author

[image: frame_0000000]

@zhangyunming

At the start of training there seemed to be some effect, but later in training it looks as if no swap happens at all.

@tiansw1

tiansw1 commented Nov 18, 2021

Go to util/videoswap.py and set crop_size=512 in the video_swap function. It worked in my project and the face swapped successfully.
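For reference, the suggested change amounts to overriding the default `crop_size` where `video_swap` is called. A minimal sketch (the `video_swap` signature below is a simplified stand-in, not the exact one in `util/videoswap.py`; check the actual parameter list in your copy of the repo):

```python
# Simplified stand-in for util/videoswap.py's video_swap. The point is that
# crop_size defaults to 224 and must be overridden to 512 for a 512-px model.
def video_swap(video_path, id_vector, swap_model, detect_model,
               save_path, crop_size=224):
    # ... the real implementation detects, aligns, and swaps faces,
    # cropping each face region to crop_size x crop_size ...
    return crop_size  # returned here only to illustrate the override

# Default call: faces are processed at 224x224.
assert video_swap("in.mp4", None, None, None, "out.mp4") == 224

# Fix suggested in this thread: force 512 to match a 512-px training run.
assert video_swap("in.mp4", None, None, None, "out.mp4", crop_size=512) == 512
```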

@netrunner-exe
Author

netrunner-exe commented Nov 18, 2021

Thank you very much for your answer; I had given up hope of getting a positive result! I will try it. Would you be able to share your pretrained model? In Google Colab I can only train up to 400, and that takes a very long time... I would be very grateful to you for that!

@netrunner-exe
Author

I did everything as you said, but there is no result. The face does not swap in the video. I changed crop_size = 224 to crop_size = 512 everywhere, and in reverse2original.py I also changed target_mask = cv2.resize(tgt_mask, (224, 224)) to (512, 512). If you don't change these values, you get: ValueError: operands could not be broadcast together with shapes (512,512,3) (224,224,1). How did you train the model? How many epochs, and which face set? Please share your pretrained model if it really works.
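That ValueError follows from NumPy's broadcasting rules: a (512, 512, 3) frame and a (224, 224, 1) mask have mismatched spatial dimensions, so elementwise blending fails until the mask is resized to the frame's resolution. A minimal reproduction, independent of SimSwap (plain NumPy arrays standing in for the frame and the mask):

```python
import numpy as np

frame = np.zeros((512, 512, 3))  # swapped face region at 512 px
mask = np.zeros((224, 224, 1))   # target mask still at the old 224 px size

# Mismatched spatial dims cannot broadcast: (512,512,3) vs (224,224,1).
try:
    _ = frame * mask
    raised = False
except ValueError:
    raised = True
assert raised

# After resizing the mask to 512x512 (what the cv2.resize change in
# reverse2original.py does), the trailing dimension 1 broadcasts against 3:
mask_512 = np.zeros((512, 512, 1))
assert (frame * mask_512).shape == (512, 512, 3)
```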

@a312863063
Owner

Hi, face swapping is indeed not that easy to train. If, later in training, the output degenerates into no swap at all, it means the reconstruction (Rec) loss weight needs to be lowered a bit. The loss weights in this code inherit the configuration from the paper (tuned for 224 resolution), so they need to be adjusted when switching to 512 resolution.
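To illustrate the maintainer's point: the generator loss is a weighted sum of terms, and if the reconstruction weight dominates, the cheapest solution for the generator is to reproduce the target frame unchanged, i.e. no visible swap. A schematic sketch (the weight names and values below are illustrative placeholders, not the repo's actual option names or configured values):

```python
# Illustrative weighted generator loss for a face-swap setup.
# lambda_* values are placeholders, NOT the repo's actual config.
def total_loss(loss_id, loss_rec, loss_adv,
               lambda_id=10.0, lambda_rec=10.0, lambda_adv=1.0):
    return lambda_id * loss_id + lambda_rec * loss_rec + lambda_adv * loss_adv

# With a heavy reconstruction weight, reducing reconstruction error pays far
# more than transferring identity, so the generator learns to copy the target:
heavy = total_loss(loss_id=1.0, loss_rec=0.1, loss_adv=0.5, lambda_rec=10.0)

# Lowering lambda_rec (the advice in this thread for 512-px training) makes
# the identity term relatively more important, pushing the generator to swap:
light = total_loss(loss_id=1.0, loss_rec=0.1, loss_adv=0.5, lambda_rec=2.0)
assert light < heavy
```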

@luoww1992
Copy link

@netrunner-exe
I have the same question. Have you solved it? If yes, how? Can you share your training args?
