
[Draft] Adding DFaker model plugin #271

Closed · wants to merge 6 commits

Conversation

@Clorr (Contributor) commented Mar 10, 2018

Note: This is not working!

I'm in the process of adding the DFaker model as a plugin. It behaves quite differently in some parts, so it is still a work in progress. Note that the image load is not plugged in, so this PR won't even launch. But if you are interested, have a look and propose solutions.

I should be able to finish this in a couple of days if everything goes well (extract and convert may be harder than I expect, so I'm still not sure).

@kellurian

Thanks for all your work with this. It’s great to see all the free time you guys put into this.

@iperov (Contributor) commented Mar 12, 2018

@Clorr thanks for your work; deepfakesclub wrote that the dfaker model gives the best quality.

@Jack29913 (Contributor)

Dfaker's full-face conversion is really great, but I can't get rid of the mask outside the face, and sometimes the mask ends just above the chin. As a side note, I used histogram matching while merging and the quality increased dramatically. Histogram matching here is not perfect (there is already an issue post about that), but I recommend you try it.
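For anyone curious, the histogram-match step mentioned above can be sketched per channel in plain NumPy (a hypothetical stand-alone helper for illustration; the faceswap converter's actual option differs in details):

```python
import numpy as np

def match_histograms(src, ref):
    """Remap src pixel intensities so each channel's distribution matches ref.

    src, ref: uint8 arrays of shape (H, W, C). Returns a uint8 array like src.
    """
    out = np.empty(src.shape, dtype=np.float64)
    for c in range(src.shape[-1]):
        s = src[..., c].ravel()
        r = ref[..., c].ravel()
        # unique values, their positions in s, and their counts
        s_vals, s_idx, s_counts = np.unique(s, return_inverse=True, return_counts=True)
        r_vals, r_counts = np.unique(r, return_counts=True)
        # empirical CDFs of both channels
        s_cdf = np.cumsum(s_counts) / s.size
        r_cdf = np.cumsum(r_counts) / r.size
        # map each source quantile onto the reference intensity at that quantile
        mapped = np.interp(s_cdf, r_cdf, r_vals)
        out[..., c] = mapped[s_idx].reshape(src.shape[:2])
    return np.clip(out, 0, 255).astype(np.uint8)
```

Matching an image against itself is a no-op, and matching against a flat reference collapses every pixel to that flat value, which is an easy sanity check.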

@Clorr (Contributor, Author) commented Mar 12, 2018

Training is now working!

However, it is still highly beta, and it uses a specific version of extract + alignments.json, so you have to rerun extract to try it...

@NagashSzarekh

Getting the below error when trying to train:

```
Traceback (most recent call last):
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\train.py", line 147, in processThread
    model = PluginLoader.get_model(trainer)(get_folder(self.arguments.model_dir), self.arguments.gpus)
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\PluginLoader.py", line 14, in get_model
    return PluginLoader._import("Model", "Model_{0}".format(name))
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\PluginLoader.py", line 23, in _import
    module = __import__(name, globals(), locals(), [], 1)
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\__init__.py", line 6, in <module>
    from .Model import Model
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\Model.py", line 13, in <module>
    from keras_contrib.losses import DSSIMObjective
ModuleNotFoundError: No module named 'keras_contrib'
```

@Clorr (Contributor, Author) commented Mar 12, 2018

Ah, you have to install keras_contrib; it is a new dependency.
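A quick way to check whether the new dependency is importable before launching training (hypothetical helper; at the time, keras-contrib was generally installed from its GitHub repository rather than PyPI):

```python
def has_keras_contrib():
    """Return True if the optional keras_contrib package is importable."""
    try:
        import keras_contrib  # noqa: F401  (provides DSSIMObjective)
        return True
    except ImportError:
        # Typically installed with:
        #   pip install git+https://www.github.com/keras-team/keras-contrib.git
        return False
```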

@NagashSzarekh

I was able to get that installed; now I'm getting a syntax error:

```
Traceback (most recent call last):
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\train.py", line 147, in processThread
    model = PluginLoader.get_model(trainer)(get_folder(self.arguments.model_dir), self.arguments.gpus)
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\PluginLoader.py", line 14, in get_model
    return PluginLoader._import("Model", "Model_{0}".format(name))
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\PluginLoader.py", line 23, in _import
    module = __import__(name, globals(), locals(), [], 1)
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\__init__.py", line 7, in <module>
    from .Trainer import Trainer
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\Trainer.py", line 95
    def show_warped(self, warped_A, warped_B)
                                             ^
SyntaxError: invalid syntax
```

@Clorr (Contributor, Author) commented Mar 13, 2018

Sadly, it is a bad habit of mine to make a last untested modification before pushing ^^
Here I forgot the ':'. I pushed a fix, but the code is still untested as I don't have my test machine at hand...

@NagashSzarekh

Also, I have noticed I am getting OOM (out-of-memory) errors when I run extract, but I do not get those when I run the same extraction on master.

@CleepStimb

It seems really promising. I managed to resume training with your plugin from a model previously trained with the original DFaker. I just renamed the original model files and reran the extract command on my A & B sets to get a correct alignments.json.

To avoid OOM errors during training, I added a "-bs 16" argument (I have a GTX 1080 Ti); it might work with a batch size of 32, but I haven't tried that yet.

Unfortunately, I haven't yet succeeded in converting, because I keep getting the same error on every face I try to convert:

```
Failed to convert image: C:\Projects\dfaker\data\video\imagename1511.jpg. Reason: The model expects 2 arrays, but only received one array. Found: array with shape (1, 64, 64, 3)
```

Do you have an idea where that error could be coming from?

@Clorr (Contributor, Author) commented Mar 14, 2018

Thanks for the feedback!
Also, you are right: the OOM comes from too high a batch size.
Convert is not implemented yet; as this is a draft, I'm still on the extract + preprocess part.
I'll keep you posted about convert (note you should be able to convert with the previous DFaker scripts using the same h5 files, I think).

@NagashSzarekh

No, I am getting the OOM error when doing face extraction, not training. Unless I am missing something, there is no batch size for extraction.

@Clorr (Contributor, Author) commented Mar 14, 2018

@DLSauron there is nothing really specific to extract here (just a modified call), so your OOM is likely not related to this PR. Can you confirm?

@NagashSzarekh

I can confirm that if I run this command on the master branch it runs through all 500+ images just fine, but if I run it with this pull request I get OOM errors. I cannot be sure, but was the resize code that iperov added removed from the extractor?

```
python faceswap.py extract -i "D:\Fakes\Data\DataSet_A" -o "D:\Fakes\Data\DataSet_A\aligned" -D cnn -r off
```

```
Traceback (most recent call last):
  File "faceswap.py", line 29, in <module>
    arguments.func(arguments)
  File "C:\Users\DLSauron\source\repos\faceswap\lib\cli.py", line 87, in process_arguments
    self.process()
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\extract.py", line 106, in process
    filename, faces = self.processFiles(filename)
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\extract.py", line 113, in processFiles
    return filename, self.handleImage(image, filename)
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\extract.py", line 131, in handleImage
    process_faces = [(idx, face) for idx, face in faces]
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\extract.py", line 131, in <listcomp>
    process_faces = [(idx, face) for idx, face in faces]
  File "C:\Users\DLSauron\source\repos\faceswap\lib\cli.py", line 164, in get_faces
    for face in faces:
  File "C:\Users\DLSauron\source\repos\faceswap\lib\faces_detect.py", line 6, in detect_faces
    face_locations = face_recognition.face_locations(frame, model=model)
  File "C:\Users\DLSauron\Envs\faceswap\lib\site-packages\face_recognition\api.py", line 114, in face_locations
    return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
  File "C:\Users\DLSauron\Envs\faceswap\lib\site-packages\face_recognition\api.py", line 98, in _raw_face_locations
    return cnn_face_detector(img, number_of_times_to_upsample)
RuntimeError: Error while calling cudaMalloc(&data, n) in file C:\Users\DLSauron\AppData\Local\Temp\pip-build-0mfs1ycn\dlib\dlib\dnn\cuda_data_ptr.cpp:28. code: 2, reason: out of memory
```

@Clorr (Contributor, Author) commented Mar 14, 2018

The only thing that may change between master and this branch is that there is no try/except around extraction. But I thought OOM was a warning, while your post suggests it is an exception. It means that your extract now stops, while before the error was silent. Anyhow, this part is not meant to be in the final release, but thanks for pointing it out.

(If you want to be sure, enable verbose mode on master; you should see the info.)

@NagashSzarekh commented Mar 14, 2018

No, even with the verbose flag on master I do not get any memory errors:

```
python faceswap.py extract -i "D:\Fakes\Data\Dataset_A" -o "D:\Fakes\Data\Dataset_A\aligned" -D cnn -r off -v
C:\Users\DLSauron\Envs\faceswap\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Input Directory: D:\Fakes\Data\Dataset_A
Output Directory: D:\Fakes\Data\Dataset_A\aligned
Filter: filter.jpg
Using json serializer
Starting, this may take a while...
Loading Extract from Extract_Align plugin...
  0%|          | 0/544 [00:00<?, ?it/s]Info: initializing keras model...
2018-03-14 10:09:41.059642: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-03-14 10:09:41.063793: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1105] Found device 0 with properties:
name: TITAN Xp major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
totalMemory: 12.00GiB freeMemory: 8.09GiB
2018-03-14 10:09:41.067797: I C:\tf_jenkins\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: TITAN Xp, pci bus id: 0000:01:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\DLSauron\Envs\faceswap\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
 67%|█████████████████████████████████████████████████████▏ | 362/544 [01:18<00:39, 4.63it/s]Warning: No faces were detected.
100%|████████████████████████████████████████████████████████████████████████████████| 544/544 [01:53<00:00, 4.81it/s]
Alignments filepath: D:\Fakes\Data\Dataset_A\alignments.json
Writing alignments to: D:\Fakes\Data\Dataset_A\alignments.json

Images found: 544
Faces detected: 576

Note:
Multiple faces were detected in one or more pictures.
Double check your results.

Done!
```

@Clorr (Contributor, Author) commented Mar 14, 2018

OK, thanks for trying. Still strange, though... As the alignments.json format changed, you have to extract again, so the only option I have for you now is to use 'hog'.

@Clorr (Contributor, Author) commented Mar 14, 2018

I have now pushed a convert.

@NagashSzarekh commented Mar 14, 2018

I performed a fresh extract of 2 datasets, one with hog; the other I was able to do with cnn. I moved the alignments.json files into the folders with the extracted faces and then tried to train. Not sure if I am missing something:

```
python faceswap.py train -A "D:\Fakes\Data\DataSet_A\cnn" -B "D:\Fakes\Data\DataSet_B\hog" -m "d:\Fakes\Models\DFaker" -p -s 100 -bs 20 -t DFaker
```

```
Traceback (most recent call last):
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "c:\users\DLSauron\appdata\local\programs\python\python36\Lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\DLSauron\source\repos\faceswap\scripts\train.py", line 153, in processThread
    trainer = trainer(model, images_A, images_B, self.arguments.batch_size, self.arguments.perceptual_loss)
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\Trainer.py", line 68, in __init__
    images_A, landmarks_A = load_images_aligned(fn_A[:minImages])
  File "C:\Users\DLSauron\source\repos\faceswap\plugins\Model_DFaker\utils.py", line 28, in load_images_aligned
    mat = get_align_mat( detected_face )
TypeError: get_align_mat() missing 2 required positional arguments: 'size' and 'should_align_eyes'
```

@Clorr (Contributor, Author) commented Mar 14, 2018

Ah, OK, the align_eyes change added new params... This should be fixed now, but I can't test as I have a training run in progress.

@NagashSzarekh

Yup I was able to start training now.

@Clorr (Contributor, Author) commented Mar 14, 2018

@DLSauron you were right; I have a commit with a modification to faces_detect.py in this PR. This shouldn't be here...

@iperov (Contributor) commented Mar 14, 2018

I reran extract, but train tries to load alignments from the aligned folder??
[screenshot: explorer_2018-03-14_21-29-37]

@clouders1111

Thanks ruah1984, it's all working now. What is the 'warped' window in training supposed to be for? I'm assuming it's applying the model of the face back onto the image so the user can judge whether the model is any good? I.e., clear and faultless images mean the model is effective, and distorted images mean the model is not complete?

@ruah1984 commented Apr 6, 2018

@clouders1111 I am still trying this model, so I'm not sure what the result will be; you can share once you complete. @Clorr has forked this repo; you can refer here:
https://github.com/Clorr/faceswap/tree/dev/dfaker

Hi @Clorr, what is the double-pass option for?

```python
parser.add_argument("--double-pass",
                    action="store_true",
                    dest="double_pass",
                    default=False,
                    help="Double Pass (DFaker converter only)")
```

@clouders1111

@ruah1984 I got the scripts to work completely and pump out some converted images. You'll probably beat me to producing decent content, as I only had the training going for 12 hours and won't be able to spare the PC to train for a while; I'll be back to this later. FYI, the faces produced after only 12 hours of training on a 1080 at batch size 16 were excellent, but they didn't seem to mask correctly over data A: it looks like the faces haven't 'stretched' correctly over the data A faces. I assume 12 hours isn't enough time and/or there isn't enough training data, given that the DFaker model is much more complex. At this point I was only trying to get all the scripts working.

@dfaker (Contributor) commented Apr 7, 2018

@iperov wonderful, I'd love to see the bugs. Any chance you could issue a PR, or have an annotated fork showing where you found the issues?

@iperov (Contributor) commented Apr 7, 2018

@dfaker it is hard to explain due to my lack of English:

  1. Your face warper doesn't warp exactly to the points of the dst face.
  2. Your points randomizer can produce a mesh that includes background.

I fixed both in my upcoming global refactoring, the best brand-new platform for deepfaceswapping and programmers :D
I use Delaunay triangles for the morph, and points.
Also, the dst->src warp is not needed: why overfit the NN with unusable data if I swap only src->dst? So I just randomize dst with itself.
Also, I don't use the DSSIM loss, because the mask layer is part of the input and output.
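For context, the Delaunay-based morph described here can be sketched with SciPy: triangulate one set of landmarks, then derive an affine map per triangle taking source vertices onto their destination counterparts (a hypothetical illustration of the idea, not iperov's code; assumes scipy is available):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_affines(src_pts, dst_pts):
    """For each Delaunay triangle over src_pts, return the 2x3 affine map
    carrying its source vertices onto the corresponding dst vertices."""
    tri = Delaunay(src_pts)
    affines = []
    for simplex in tri.simplices:
        s = src_pts[simplex]          # (3, 2) source triangle vertices
        d = dst_pts[simplex]          # (3, 2) destination triangle vertices
        # solve [x y 1] @ X = [x' y'] for each vertex row; M = X.T is 2x3
        A = np.hstack([s, np.ones((3, 1))])
        M = np.linalg.solve(A, d).T
        affines.append(M)
    return tri, affines
```

Fixed background points around the frame border, as dfaker suggests below, would simply be appended to both landmark sets before triangulating, which keeps the background from being dragged along with the face mesh.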

I will release whole fork after several tests.

Check my topics in the playground:
deepfakes/faceswap-playground#122
deepfakes/faceswap-playground#120
deepfakes/faceswap-playground#125
deepfakes/faceswap-playground#126

@dfaker (Contributor) commented Apr 7, 2018

@iperov sounds good; the failings of the warping were one of the things I was least pleased with, and a Delaunay triangulation with fixed background points sounds like a perfect solution!

I wish my working notes and other suggestions from the wiki hadn't been lost when the subreddit went down, but they were mostly focused on refinements to the src->dst transformation, which by the sounds of it you can simplify.

If you're saying you merged the two output channels into a single 4-channel output, I'd advise caution; in my original tests that tended to mess with the other channels in unexpected ways.

@iperov (Contributor) commented Apr 7, 2018

But all is fine, even on the 2GB original model.

@ruah1984

Can it be merged into master?

@NagashSzarekh

I would think not, until the extraction problem is fixed.

@iperov (Contributor) commented Apr 14, 2018

@dfaker
My DSSIM loss, which works with an embedded alpha mask:

```python
import tensorflow as tf

class penalized_loss(object):
    def __init__(self, lossFunc):
        self.lossFunc = lossFunc

    def __call__(self, y_true, y_pred):
        # split the RGBA tensors into individual channels
        tr, tg, tb, ta = tf.split(y_true, 4, 3)
        pr, pg, pb, pa = tf.split(y_pred, 4, 3)

        # mask both RGB images with the target's alpha channel
        t = tf.concat([tr, tg, tb], 3) * ta
        p = tf.concat([pr, pg, pb], 3) * ta

        return self.lossFunc(t, p)
```

It works the same :) but the model size is smaller.
I have just now launched it with IAE 128 and a super-improved df full-face match warper.
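The effect of this alpha-gated loss can be illustrated framework-free (a hypothetical NumPy sketch of the same masking idea, mirroring the tf.split/tf.concat structure above; not iperov's actual code):

```python
import numpy as np

def masked_loss(y_true, y_pred, base_loss):
    """Apply base_loss only where the target's alpha channel is active.

    y_true, y_pred: float arrays of shape (..., 4) holding RGBA; the alpha
    channel of y_true gates which pixels contribute to the loss.
    """
    ta = y_true[..., 3:4]      # target alpha mask
    t = y_true[..., :3] * ta   # masked target RGB
    p = y_pred[..., :3] * ta   # masked predicted RGB
    return base_loss(t, p)
```

With a mean-absolute-error base loss, pixels where the target alpha is zero contribute nothing, so the network is never penalized for what it paints outside the face mask.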

@iperov (Contributor) commented Apr 14, 2018

@dfaker hm, but the mask result is not the same, so I have to include a mask decoder.

@ruah1984

Has the extraction problem been fixed?

@torzdf torzdf changed the base branch from master to staging April 22, 2018 10:57
@torzdf torzdf added the work in progress For demonstration/feedback, not yet ready to merge label Apr 22, 2018
@luiscosio (Contributor)

Any news on this pull request?

@agilebean

It seems this pull request was abandoned.
Is anybody working to resolve the conflicts?
It would be great, as the dfaker plugin outperformed faceswap, so this merge would substantially close the gap!

@torzdf (Collaborator) commented Jun 21, 2018

I think @andenixa is working on a new version of this

@andenixa (Contributor) commented Jun 21, 2018

@torzdf @agilebean
Quite so, I am working on bringing the DF model to faceswap. I am not sure the term 'outperformed' applies here. DF is a full-face model, which faceswap is lacking. I wouldn't presume it necessarily has better quality; perhaps better results, because of more face coverage. From my experience, deepfakes.club isn't inhabited by the most technically savvy people.

@clouders1111

I assume the DFaker plugin has been abandoned?

@torzdf (Collaborator) commented Nov 30, 2018

This version has.

A new port is in development and has been tested working, but I'm a little short of time to finish it.

It will be coming, though, hopefully before the end of the year.

Dfaker training in FS:
[screenshot: dfaker in faceswap]

@clouders1111

Nice, I can't wait to see it. Has any model been produced that can reproduce as much of the face as DFaker? Nothing I've trialed quite compares to it.

@torzdf (Collaborator) commented Jan 2, 2019

Superseded by PR #572

@torzdf torzdf closed this Jan 2, 2019
Labels
work in progress For demonstration/feedback, not yet ready to merge