[Draft] Adding DFaker model plugin #271
Conversation
Thanks for all your work with this. It's great to see all the free time you guys put into this.
@Clorr thanks for your work. deepfakesclub wrote that the DFaker model gives the best quality.
DFaker's full-face conversion is really great, but I can't get rid of the mask outside the face, and sometimes the mask ends just above the chin. As a side note, I used histogram matching while merging and the quality increased dramatically. Note that histogram matching here is not perfect; there is already an issue post about that. But I recommend you try it.
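For readers unfamiliar with the technique mentioned above: histogram matching remaps the converted face's pixel values so their distribution matches the target frame's, which helps the swapped face blend in. A minimal per-channel CDF-matching sketch in plain NumPy follows; the function name and exact approach are illustrative, not the code used in the merge step:

```python
import numpy as np

def match_histograms(source, template):
    """Remap source pixel values so each channel's histogram
    matches the template's (classic CDF matching)."""
    matched = np.empty_like(source)
    for c in range(source.shape[-1]):
        src = source[..., c].ravel()
        tmpl = template[..., c].ravel()
        # unique values, their positions in src, and their counts
        s_vals, s_idx, s_counts = np.unique(
            src, return_inverse=True, return_counts=True)
        t_vals, t_counts = np.unique(tmpl, return_counts=True)
        # empirical CDFs of both images
        s_cdf = np.cumsum(s_counts) / src.size
        t_cdf = np.cumsum(t_counts) / tmpl.size
        # map each source quantile onto the template's value at that quantile
        mapped = np.interp(s_cdf, t_cdf, t_vals)
        matched[..., c] = mapped[s_idx].reshape(source.shape[:-1])
    return matched
```

In a merge step this would be applied to the swapped face region only, using the masked target region as the template, before compositing.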
Training is now working! However, it is still highly beta, and it also uses a specific version of extract + alignments.json, so you have to rerun extract to try it...
Getting the below error when trying to train:
Traceback (most recent call last):
Ah, you have to install keras_contrib; it is a new dependency.
I was able to get that installed; now I'm getting a syntax error:
Traceback (most recent call last):
Sadly, it is a bad habit of mine to make one last untested modification before pushing ^^
Also, I have noticed I am getting OOM errors when I run extract, but I do not get those when I run the same extraction in master.
It seems really promising. I managed to resume training with your plugin from a model previously trained with the original DFaker. I just renamed the original model files and reran the extract command on my A & B sets to get a correct alignments.json. To avoid the OOM errors during training, I added a "-bs 16" argument (I have a GTX 1080 Ti); it might work with a batch size of 32, but I haven't tried it yet. Unfortunately, I haven't succeeded in converting yet because I keep getting the same error on every face I try to convert.
Do you have an idea where that error could be coming from?
Thanks for the feedback!
No, I am getting the OOM error when doing face extraction, not training. Unless I am missing something, there is no batch size for extraction.
@DLSauron there is nothing really specific to extract here (just a modified call), so your OOM is likely not related to this PR. Can you confirm?
I can confirm: if I run this command with the master branch it runs through all 500+ images just fine, but if I run it with this pull request I get OOM errors. I cannot be sure, but was the resize code that iperov added removed from the extractor?
python faceswap.py extract -i "D:\Fakes\Data\DataSet_A" -o "D:\Fakes\Data\DataSet_A\aligned" -D cnn -r off
Traceback (most recent call last):
The only thing that may change between master and here is that there is no (if you want to be sure, you can enable verbose mode in master; you should see the info).
No, even with the verbose flag in master I do not get any memory errors.
python faceswap.py extract -i "D:\Fakes\Data\Dataset_A" -o "D:\Fakes\Data\Dataset_A\aligned" -D cnn -r off -v
Images found: 544 Note: Done!
Ok, thanks for trying. Still strange, though... As the alignments.json changed, you have to extract again, so the only option I have for you now is to use 'hog'.
I now pushed a convert
I performed a fresh extract of 2 datasets, one with hog; the other I was able to do with cnn. I moved the alignments.json into the folders with the extracted faces and then tried to train. Not sure if I am missing something.
python faceswap.py train -A "D:\Fakes\Data\DataSet_A\cnn" -B "D:\Fakes\Data\DataSet_B\hog" -m "d:\Fakes\Models\DFaker" -p -s 100 -bs 20 -t DFaker
Traceback (most recent call last):
Ah ok, the align_eyes change added new params... This should be fixed, but I can't test as I have a training run in progress.
Yup, I was able to start training now.
@DLSauron you were right; I have a commit with a modification to faces_detect.py in this PR. This shouldn't be here...
Thanks ruah1984. It's all working now. What is the 'warped' window in training supposed to be for? I'm assuming it's applying the model of the face back onto the image so the user can judge whether the model is any good? I.e. clear and faultless images mean the models are effective, and distorted images mean the model is not complete?
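For context on the question above: in this family of trainers, the 'warped' preview typically shows the randomly distorted input image the network is asked to reconstruct, so you can judge how well it recovers a clean face from a warped one. A toy version of such a random warp is sketched below (nearest-neighbour remap through a jittered coarse grid, in plain NumPy); this is purely illustrative and not the trainer's actual augmentation code:

```python
import numpy as np

def random_warp(image, grid=5, scale=2.0, rng=None):
    """Remap an image through a jittered coarse grid (nearest-neighbour).

    Illustrates the warp-then-reconstruct training idea only."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    knots = np.arange(grid)
    xs = np.linspace(0, grid - 1, w)
    ys = np.linspace(0, grid - 1, h)

    def upsample(field):
        # separable linear interpolation: rows first, then columns
        rows = np.stack([np.interp(xs, knots, r) for r in field])   # (grid, w)
        return np.stack([np.interp(ys, knots, rows[:, j])
                         for j in range(w)], axis=1)                # (h, w)

    # coarse random displacement field, upsampled to full resolution
    dx = upsample(rng.normal(0.0, scale, (grid, grid)))
    dy = upsample(rng.normal(0.0, scale, (grid, grid)))
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(yy + dy), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xx + dx), 0, w - 1).astype(int)
    return image[src_y, src_x]
```

During training the warped image is the network input and the unwarped original is the target, which is why a sharp, undistorted reconstruction in the preview is a good sign.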
@clouders1111 I haven't tried this model yet, so I'm not sure what the result will be. You can share once you complete it. @Clorr has a fork of this repo; you can refer to it here. Hi @Clorr, what is the double pass for?
@ruah1984 I got the scripts to completely work and pump out some converted images. You'll probably beat me to producing decent content, as I only had the training going for 12 hours and won't be able to spare the PC to train for a while; I'll be back to this later. But FYI, the faces produced after only 12 hours of training on a 1080 at batch size 16 were excellent, but they didn't seem to mask correctly over data A. It looks like the faces haven't 'stretched' correctly over the data A faces. I assume 12 hours isn't enough time and/or there isn't enough training data, given that the DFaker model is much more complex. I'll come back to this later; at this point I was only trying to get all the scripts working.
@iperov wonderful, I'd love to see the bugs. Any chance you could issue a PR, or have an annotated fork showing where you found issues?
@dfaker it's hard to explain due to my limited English.
I fixed it in my upcoming global refactoring, the best brand-new platform for deep face swapping and programmers :D I will release the whole fork after several tests. Check my topics in the playground.
@iperov sounds good. The failings of the warping were one of the things I was least pleased with; a Delaunay triangulation with fixed background points sounds like a perfect solution! I wish my working notes and other suggestions from the wiki hadn't been lost when the subreddit went down, but they were mostly focused on refinements to the src->dst transformation, which by the sounds of it you can simplify. If you're saying you merged the two output channels into a single 4-channel output, I'd urge caution: in my original tests that tended to mess with the other channels in unexpected ways.
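The fixed-background-point idea mentioned above can be sketched as follows: append anchor points along the image border to the face landmarks before triangulating, so that a piecewise-affine warp stays pinned at the frame edges instead of dragging the background along with the face. This is a sketch using scipy; the anchor layout and function name are assumptions, not DFaker's actual implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_with_anchors(landmarks, size):
    """Delaunay-triangulate face landmarks plus fixed border anchors
    for a piecewise-affine warp that is pinned at the image edges."""
    m = size - 1
    # eight fixed anchors: the four corners and four edge midpoints
    border = np.array([[0, 0], [m // 2, 0], [m, 0],
                       [0, m // 2],         [m, m // 2],
                       [0, m], [m // 2, m], [m, m]], dtype=float)
    points = np.vstack([np.asarray(landmarks, dtype=float), border])
    return points, Delaunay(points)
```

Each simplex of the resulting triangulation can then be warped with its own affine transform; triangles whose vertices are all border anchors never move, which keeps the background stable.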
But all is fine even with the 2 GB original model.
Can this be merged to master?
I would think not until the extraction problem is fixed. |
@dfaker it works the same :) but the model size is smaller.
@dfaker hm, but the mask result is not the same, so I have to include the mask decoder.
Has the extraction problem been fixed?
Any news on this pull request? |
It seems this pull request was abandoned.
I think @andenixa is working on a new version of this |
@torzdf @agilebean |
I assume the DFaker plugin has been abandoned?
Nice, I can't wait to see it. Has anything produced a model that can reproduce as much of the face as DFaker? Nothing I've trialed quite compares to it.
Superseded by PR #572 |
Note: This is not working!
I'm in the process of adding the DFaker model as a plugin. It behaves quite differently in some parts, so it is still a work in progress. Note that the image load is not plugged in, so this PR won't even launch. But if you are interested, have a look and propose a solution.
I should be able to finish this in a couple of days if everything goes well (extract and convert may be harder than I expect, so I'm still not sure).