
Add Improved AutoEncoder model. #251

Merged
5 commits merged Mar 11, 2018

Conversation

@acsaga (Contributor) commented Mar 8, 2018

This adds a new autoencoder model, which I call the Improved AutoEncoder (IAE).
Given the same training time, this model generates better (sharper and more natural-looking) faces than the original model.

In IAE, autoencoder_A and autoencoder_B share the same encoder and decoder.
Between the encoder and decoder, IAE explicitly adds intermediate layers for the latent variables.

An (ugly) illustration of the model structure:
[image]
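Since the illustration may not render here, the layout can be sketched at the shape level with plain NumPy stand-ins for the real Keras layers (all layer sizes below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)

def dense(in_dim, out_dim):
    # Stand-in for a trainable Dense layer: fixed random weights + ReLU.
    w = rng.randn(in_dim, out_dim) * 0.01
    return lambda x: np.maximum(x @ w, 0.0)

CODE, LATENT, PIXELS = 1024, 512, 64 * 64 * 3  # hypothetical sizes

encoder    = dense(PIXELS, CODE)        # shared by autoencoder_A and _B
inter_A    = dense(CODE, LATENT)        # intermediate branch for face A only
inter_B    = dense(CODE, LATENT)        # intermediate branch for face B only
inter_both = dense(CODE, LATENT)        # intermediate branch shared by A and B
decoder    = dense(2 * LATENT, PIXELS)  # shared by autoencoder_A and _B

def autoencoder_A(x):
    code = encoder(x)
    return decoder(np.concatenate([inter_A(code), inter_both(code)], axis=-1))

def autoencoder_B(x):
    code = encoder(x)
    return decoder(np.concatenate([inter_B(code), inter_both(code)], axis=-1))

face = rng.randn(1, PIXELS)
print(autoencoder_A(face).shape)  # (1, 12288)
print(autoencoder_B(face).shape)  # (1, 12288)
```

The point of the sketch: inter_A and inter_B are the only parts not shared between the two autoencoders; everything else is trained on both faces.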

@Clorr (Collaborator) commented Mar 8, 2018

Hi @acsaga,
Thank you for this awesome PR, it is always a pleasure to see a new model emerge!
I'm going to test it ASAP. Just a remark: we are moving to a new folder structure in the latest commits. You can check my repo to see how it looks now. The recommended structure for you would be:

  • Create a Model_IAE folder in plugins
  • Create an __init__.py like this one
  • Put your files in this folder

Let me know if you need help, or if you want me to do this for you.

@Clorr (Collaborator) commented Mar 8, 2018

Thanks for making the changes. However, I think you forgot to add the files. The .gitignore is buggy, so you have to add the new files manually. I don't know which tool you are using, but with the git CLI you have to run `git add -f *.py` from the folder where your files are ;-)
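For reference, the git CLI steps might look like the following, sketched in a throwaway repository (the `*.py` ignore rule here is an assumption standing in for the actual buggy .gitignore):

```shell
# Throwaway repo simulating an ignore rule that swallows the new files.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
printf '*.py\n' > .gitignore
touch Model.py

git add Model.py || true   # refused: Model.py matches an ignore rule
git add -f Model.py        # -f (--force) overrides .gitignore
git status --short         # Model.py is now staged
```

`git add -f` only bypasses the ignore rule for the named paths; it does not change .gitignore itself.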

@acsaga (Contributor, Author) commented Mar 8, 2018

Thanks for your tips :)
Now it should be fine.

@Clorr (Collaborator) commented Mar 8, 2018

Seems perfect to me ;-) I can merge this right now, or do you want to wait for some feedback? (If I merge, I suggest you open an issue so that others are aware of this new plugin.)

@Apollo122 (Contributor) commented Mar 8, 2018

An example output, or maybe a side-by-side comparison, would be nice.

@iperov (Contributor) commented Mar 8, 2018

I launched training with Daddario. We'll see the difference.

@acsaga (Contributor, Author) commented Mar 8, 2018

@Apollo122 FYI, here's a quick comparison:

Original Model:
loss:
faceswap-original-loss
output:
faceswap-original-image

IAE Model:
loss:
faceswap-iae-loss
output:
faceswap-iae-image

I trained the IAE model for less time than the original model, and the original model reached a smaller loss value. But you can see that the IAE model's output has better quality.

@acsaga (Contributor, Author) commented Mar 8, 2018

@Clorr Feedback is welcome.
I will try this model on the Trump/Cage dataset this weekend and post the result.

@Apollo122 (Contributor) commented Mar 8, 2018

@acsaga thanks for the comparison.
Does IAE need a separate converter? Can we use the Masked converter to merge?

@ruah1984 commented Mar 8, 2018

Can an existing model continue training with this IAE model?

@kellurian commented Mar 9, 2018

You can add the multi-GPU model calls in your model files as well and they work fine, so when we get it all merged into one model file it should work great. bryanlyon seems to be doing this well in #256. I downloaded and tested your model with my two GPUs and the multi-GPU additions, and holy crap do they run fast! Funny enough, the old model ran my GTX 980 Tis at about 40% but it was up and down; this model runs them at about 70% and seems to be much more constant. But it doesn't work with a previously trained model, @ruah1984; it'll start the training over. I like this new model idea!

@Clorr (Collaborator) commented Mar 9, 2018

@acsaga For my knowledge, is there some theoretical background for this, or is it the result of your experiments? It is a kind of variational autoencoder, isn't it?

@iperov (Contributor) commented Mar 9, 2018

AMAZING RESULT

@iperov (Contributor) commented Mar 9, 2018

After 18 hours of training there are no more loss changes (sampled every 10 min):
2018-03-09_13-45-59

Look at the comparison video (SFW). The convert options are the same:
[Link Removed]

At left: the Original model, but with my modified "hi-res" decoder:

def Decoder(self):
    input_ = Input(shape=(8, 8, 512))
    x = input_
    x = self.upscale(512)(x)
    x = self.upscale(256)(x)
    x = self.upscale(128)(x)
    x = self.upscale(64)(x)
    x = self.upscale(32)(x)
    x = Conv2D(3, kernel_size=5, padding='same', activation='sigmoid')(x)
    x = NearestNeighborDownsampler()(x)
    x = BicubicDownsampler()(x)
    return KerasModel(input_, x)

So I got more detailed eyes.

At right: the subject IAE.

Now compare the problem areas:
[Images removed]
IAE is better.

Now I will launch IAE, but with the "hi-res" decoder.

@acsaga (Contributor, Author) commented Mar 9, 2018

@Apollo122 Yes, this model uses the same converter as the Original model.
@kellurian Thanks, I will add the multi-GPU option and test it later.

@Clorr Yes, theoretically it can be interpreted as a kind of variational autoencoder.

The intuitive idea behind this model is that a face carries two types of information: appearance and facial expression.
What the IAE model does is:

  1. Extract face information from a face image. (Encoder)
  2. Split the face information into two parts: facial expression and appearance. (Intermediate layers: inter_A for the appearance of face A, inter_B for the appearance of face B, inter_both for the facial expression of A and B.)
  3. Combine face A's facial expression and face B's appearance into new information. (Concatenate intermediate layers.)
  4. Generate a new face image from the new information. (Decoder)

Actually, the original model does the same thing. The IAE model gets better results because it separates the steps into different layers, so that every layer focuses on only one task.
The original model mixes the steps: the encoder does task 1 and stores facial expressions, while the decoder stores appearance and does task 4. So it's harder to train.
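The four steps above can be sketched at swap time with NumPy stand-ins for the trained layers (the layer names follow the comment above, the sizes are hypothetical, and this is not the plugin's actual code):

```python
import numpy as np

rng = np.random.RandomState(1)

def dense(in_dim, out_dim):
    # Stand-in for a trained Dense layer.
    w = rng.randn(in_dim, out_dim) * 0.01
    return lambda x: np.maximum(x @ w, 0.0)

CODE, LATENT, PIXELS = 1024, 512, 64 * 64 * 3  # hypothetical sizes
encoder    = dense(PIXELS, CODE)
inter_B    = dense(CODE, LATENT)   # appearance of face B
inter_both = dense(CODE, LATENT)   # facial expression, shared by A and B
decoder    = dense(2 * LATENT, PIXELS)

def swap_A_to_B(face_a):
    code = encoder(face_a)                                        # 1. extract
    expression = inter_both(code)                                 # 2. split: expression
    appearance = inter_B(code)                                    # 2. split: B's appearance
    combined = np.concatenate([appearance, expression], axis=-1)  # 3. combine
    return decoder(combined)                                      # 4. generate

face_a = rng.randn(1, PIXELS)
print(swap_A_to_B(face_a).shape)  # (1, 12288)
```

The swap happens entirely in step 3: face A's code is routed through face B's appearance branch while keeping the shared expression branch.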

@acsaga (Contributor, Author) commented Mar 9, 2018

@iperov Glad to hear it works 😄

@Clorr (Collaborator) commented Mar 9, 2018

@acsaga Thanks for this information.

I'm always fascinated to see how versatile NNs are... How many faces do you think we could store in one model? In that case, we could add more intermediate layers for more people, don't you think?

By the way, if you have some inspiration, feel free to give more information in your __init__.py about yourself, credits, license and so on...

@ruah1984 commented Mar 10, 2018

@acsaga, if each intermediate layer manages different information, will there be more loss when all the information is merged and the decoder generates the new face?

@ruah1984 commented Mar 10, 2018

I tried this last night; it gets a clear and clean image faster than the original model. I don't know whether a retrained decoder + IAE can work together or not.

@bryanlyon (Collaborator) commented Mar 11, 2018

This is still a good addition. I've modified it to support the new plugin loader and the GAN v2.0 update at https://github.com/bryanlyon/faceswap/tree/multi_gpu_iae . I will create a pull request if desired, but it requires #272 to be merged first.

@iperov (Contributor) commented Mar 11, 2018

I got a much different result vs. the Original model in a man-to-man conversion.

Source man:
00075

Dest man:
out00042

Original:
out00047

IAE:
out00047 2

IAE constructs an absolutely new, unrecognizable man. It also adds some mustache.

@acsaga (Contributor, Author) commented Mar 11, 2018

18 hours of training (1080 Ti) on the Cage/Trump task:

image

image

Please merge this pull request when it's OK for you. I think it's time to let more people use and test this model ;)

@Clorr @ruah1984 In principle, if there are enough different faces for training, I think the encoder in IAE can become a universal face encoder (extracting facial expression and appearance from any face), and the decoder in IAE can become a universal face decoder (reconstructing a face image from an arbitrary combination of facial expression and appearance). Then yes, we could train many intermediate layers for many people and swap faces between any two of them 🤔
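The many-people idea could be sketched as one appearance branch per identity hanging off the shared encoder and decoder. This is purely a speculative illustration with NumPy stand-ins (hypothetical names and sizes), not code from this PR:

```python
import numpy as np

rng = np.random.RandomState(2)

def dense(in_dim, out_dim):
    # Stand-in for a trained Dense layer.
    w = rng.randn(in_dim, out_dim) * 0.01
    return lambda x: np.maximum(x @ w, 0.0)

CODE, LATENT, PIXELS = 1024, 512, 64 * 64 * 3  # hypothetical sizes
encoder    = dense(PIXELS, CODE)               # "universal" face encoder
inter_both = dense(CODE, LATENT)               # shared expression branch
decoder    = dense(2 * LATENT, PIXELS)         # "universal" face decoder

# One appearance branch per known identity; adding a person = adding one branch.
people = ["A", "B", "C"]
inter = {name: dense(CODE, LATENT) for name in people}

def swap(face, target):
    # Render `face`'s expression with `target`'s appearance.
    code = encoder(face)
    latent = np.concatenate([inter[target](code), inter_both(code)], axis=-1)
    return decoder(latent)

face = rng.randn(1, PIXELS)
print(swap(face, "B").shape)  # (1, 12288)
```

Only the small per-identity branches would grow with the number of people; the encoder and decoder stay fixed.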

@bryanlyon Thanks for adding multi-GPU support for the model 👍

@iperov I guess the difference is because the model needs more faces of the "source man" (with different facial expressions) to help it split the latent variables into "facial expression" and "appearance".
In your example, the unrecognizable man looks like a mix of the source man and the dest man: the model failed to distinguish the "facial expression" and "appearance" information, and put too much of it (for example, the mustache) into the "facial expression" layer.

@iperov (Contributor) commented Mar 11, 2018

@acsaga source man: 3000 photos.

@Clorr Clorr merged commit 1b80de8 into deepfakes:master Mar 11, 2018
@Clorr (Collaborator) commented Mar 11, 2018

Important notice: I renamed the model files so that users who try this don't overwrite their original model.

Rename your existing IAE model files if you trained some models previously! (I added an "IAE_" prefix.)

@Kirin-kun commented Mar 11, 2018

@iperov The eyes you produced with your "hi-res" decoder are awesome. Any chance of adding it to the repo?

@iperov (Contributor) commented Mar 11, 2018

@Kirin-kun I don't know. It eats a lot of VRAM, so it works only with batch size 4 on my 6 GB card.
Also, "hi-res" with IAE constructs an absolutely new face, like transferring A's features onto B's face.

leebrian added a commit to leebrian/faceswap that referenced this pull request Apr 3, 2018
* Adds arg to select trainer used to create model (deepfakes#105)

Stops the layer count mismatch when a LowMem model is converted using the Original model.

* Added CUDA link for Ubuntu

* Adds information about running scripts with help (deepfakes#111)

Updated information so that users can quickly see the available parameters for the various scripts.
Also made the section headers correspond with the actual scripts - e.g. extract, train, convert

* Fix breakage on some versions of python3 (on Ubuntu 17.10 for sure)

Also gives better info when you don't include any arguments (or bad ones)

* Add demosaic (deepfakes#131)

Copy from https://github.com/dfaker/df/blob/master/merge_faces_larger.py#L127-L133

* Update Convert_Masked.py (deepfakes#130)

correct issue when seamless_clone is true

* Misc updates on master before GAN. Added multithreading + mmod face detector (deepfakes#109)

* Preparing GAN plugin

* Adding multithreading for extract

* Adding support for mmod human face detector

* Adding face filter argument

* Added process number argument to multiprocessing extractor.

Fixed progressbar for multiprocessing.

* Added tiff as image type.
compression artefacts hurt my feelings.

* Cleanup

* Changes I forgot to push :-/ (deepfakes#136)

* Adding GAN plugin (deepfakes#102)

Update GAN plugin to latest official version

* Correcting model paths

* Fixing ConvertImage has no attribute check_skip

deepfakes#143

* print out which image caused error (deepfakes#147)

* Added serializers.

This speeds up convert speed x4 on my machine.

* lowercase mask_type (deepfakes#181)

The mask_type passed in to this function is lowercase, changing string literals to match.

* New arg. (deepfakes#177)

* Documentation and grammar (deepfakes#167)

* Fixed gramatical mistake

* Improved documentation and fixed spelling and grammar

* fixes for some comments

* should fix full paths, and windowspath issues

* didn't import os in extract.

* fixed and tested, briefly with gan and original.

* Skip images that aren't in the alignments.json

* no os import in convert.

* fix for have_face.

* Update INSTALL.md (deepfakes#178)

INSTALL.md now aligns with requirements in script.

* Clearer requirements for each platform (deepfakes#183)

* Adding allow_growth option (deepfakes#140)

* Add GNU General Public License v3.0

* Refactoring scripts/extract.py (deepfakes#216)

Refactored extract to have handleImage take in a image instead of a filename. This will be useful for writing future extensions for extract where you may not have a file on disk but you do have a image in memory. For example extracting faces from a video.

* Match the command to split/generate video (deepfakes#196)

* Correcting a bug in handleImage call

* Correcting extract.py

* port 'face_alignment' from PyTorch to Keras. (deepfakes#228)

* port 'face_alignment' from PyTorch to Keras. It works 2x faster, but initialization takes 20 secs.

2DFAN-4.h5 and mmod_human_face_detector.dat included in lib\FaceLandmarksExtractor

fixed dlib vs tensorflow conflict: dlib must do op first, then load keras model, otherwise CUDA OOM error

if the face location is not found by CNN, it tries to find it by HOG.

removed this:
-        if face.landmarks == None:
-            print("Warning! landmarks not found. Switching to crop!")
-            return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks

* removing DetectedFace.landmarks

* fix issue deepfakes#234 (deepfakes#235)

* fix deepfakes#233, deepfakes#237, + working with any batch size (deepfakes#236)

* fix working with any batch size

* fix deepfakes#233 - primordial bug when dlib recognizes fewer faces than FakeApp; reason: RGB/BGR order affects the chance of face recognition. Numpy.imread and cv2.imread load the same image in different RGB/BGR order.

fix deepfakes#237

* fix

* Allows applying dilation by passing negative erosion kernel values. If value is negative, … (deepfakes#238)

* Allows for negative erosion kernel for -e arg. If value is negative, it turns it into a dilation kernel, which allow facehullandrect to cover more space. Can help to cover double eyebrows. Also could be useful with Masked converter for GAN that oatsss is working on.

* Update convert.py

Modified argument help to clarify the effects of erosion and dilation as parameters

* Add debugging option for drawing landmarks on face extraction. (deepfakes#199)

* backslash problem correction

fix for deepfakes#239

* Skip already extracted frames when using extract.py (deepfakes#214)

* Pytorch and face-alignment

* Skip processed frames when extracting faces.

* Reset to master version

* Reset to master

* Added --skip-existing argument to Extract script. Default is to NOT skip already processed frames.
Added logic to write_alignments to append new alignments (and preserve existing ones)
to existing alignments file when the skip-existing option is used.

* Fixed exception for --skip-existing when using the convert script

* Sync with upstream

* Fixed error when using Convert script.

* Bug fix

* Merges alignments only if --skip-existing is used.

* Creates output dir when not found, even when using --skip-existing.

* Update GAN64 to v2 (deepfakes#217)

* Clearer requirements for each platform

* Refactoring of old plugins (Model_Original + Extract_Align) + Cleanups

* Adding GAN128

* Update GAN to v2

* Create instance_normalization.py

* Fix decoder output

* Revert "Fix decoder output"

This reverts commit 3a8ecb8.

* Fix convert

* Enable all options except perceptual_loss by default

* Disable instance norm

* Update Model.py

* Update Trainer.py

* Match GAN128 to shaoanlu's latest v2

* Add first_order to GAN128

* Disable `use_perceptual_loss`

* Fix call to `self.first_order`

* Switch to average loss in output

* Constrain average to last 100 iterations

* Fix math, constrain average to intervals of 100

* Fix math averaging again

* Remove math and simplify this damn averaging

* Add gan128 conversion

* Update convert.py

* Use non-warped images in masked preview

* Add K.set_learning_phase(1) to gan64

* Add K.set_learning_phase(1) to gan128

* Add missing keras import

* Use non-warped images in masked preview for gan128

* Exclude deleted faces from conversion

* --input-aligned-dir defaults to "{input_dir}/aligned"

* Simplify map operation

* port 'face_alignment' from PyTorch to Keras. It works 2x faster, but initialization takes 20 secs.

2DFAN-4.h5 and mmod_human_face_detector.dat included in lib\FaceLandmarksExtractor

fixed dlib vs tensorflow conflict: dlib must do op first, then load keras model, otherwise CUDA OOM error

if the face location is not found by CNN, it tries to find it by HOG.

removed this:
-        if face.landmarks == None:
-            print("Warning! landmarks not found. Switching to crop!")
-            return cv2.resize(face.image, (size, size))
because DetectedFace always has landmarks

* Enabled masked converter for GAN models

* Histogram matching, cli option for perceptual loss

* Fix init() positional args error

* Add backwards compatibility for aligned filenames

* Fix masked converter

* Remove GAN converters

* Fix line endings (deepfakes#266)

* Remove files with line-ending issues

* Add back files with line-ending issues

* Fixes (deepfakes#267)

* Use k-nn for face filtering (deepfakes#262)

* Add negative filters for face detection

When detecting faces that are very similar, the face recognition can
produce positive results for similar looking people. This commit allows
the user to add multiple positive and negative reference images. The
facedetection then calculates the distance to each reference image
and tries to guess which is more likely using the k-nearest method.

* Do not calculate knn if no negative images are given

* Clean up outputting

* sorttool.py

* PluginLoader.get_available_models()
PluginLoader.get_default_model()
provides easy integration of Model_* folders without changing convert.py and train.py

* Fixing number of args

* Add image rotation for detecting more faces and dealing with awkward angles (deepfakes#253)

* Image rotator for extract and convert ready for testing

* Revert "Image rotator for extract and convert ready for testing"

This reverts commit bbeb19e.

Error in extract code

* add image rotation support to detect more faces

* Update convert.py

Amended to do a single check for for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.

* Update convert.py

remove type

* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once

* Changed command line flag to take arguments to ease future development

* Realigning for upstream/Master

* Minor fix

* Fix for missing default rotation value (deepfakes#269)

* Image rotator for extract and convert ready for testing

* Revert "Image rotator for extract and convert ready for testing"

This reverts commit bbeb19e.

Error in extract code

* add image rotation support to detect more faces

* Update convert.py

Amended to do a single check for for rotation rather than checking twice. Performance gain is likely to be marginal to non-existent, but can't hurt.

* Update convert.py

remove type

* cli.py: Only output message on verbose. Convert.py: Only check for rotation amount once

* Changed command line flag to take arguments to ease future development

* Realigning for upstream/Master

* Minor fix

* Change default rotation value from None to 0

* Add Improved AutoEncoder model. (deepfakes#251)

* Add Improved AutoEncoder model.

* Refactoring Model_IAE to match the new model folder structure

* Add Model_IAE in plugins

* Improving performance of extraction. Two main changes to improve the … (deepfakes#259)

* Improving performance of extraction. Two main changes to improve the most recent modifications to extract: 1st FaceLandmarkExtractor would try to use cnn first, then try hog. The problem was that this reduced the speed by 4 for images where cnn didn't find anything, and most of the times hog wouldn't find anything either or it would be a bad extract. For me it wasn't worth it. With this you can specify on input -D if you want to use hog, cnn, or all. 'all' will try cnn, then hog like FaceLandmarkExtractor was doing. cnn or hog will just use 1 detection method. 2nd change is a rehaul of the verbose parameter. Now warnings when a face is not detected will just be shown if indicated by -v or --verbose. This restores the verbose function to what it once was. With this change I was able to process 1,000 per each 4 minutes regardless if faces were detected or not. Performance improvement just applies to not detected images but I normally will have lots of images without clear faces in my set, so I figured it would impact others. Also the introduction of 'all' would allow trying other models together more easily in the future.

* Update faces_detect.py

* Update extract.py

* Update FaceLandmarksExtractor.py

* spacing fix

* Renaming model files

* Fix to Model_IAE not working (deepfakes#275)

Fix to Model_IAE giving excess positional arguments error.

* -by face

* Add Multi-GPU support (deepfakes#272)

* Add Improved AutoEncoder model.

* Refactoring Model_IAE to match the new model folder structure

* Add Model_IAE in plugins

* Add Multi-GPU support

I added multi-GPU support to the new model layout.  Currently, Original is not tested (due to OOM on my 2x 4gb 970s).  LowMem is not tested with the current commit due to it not being available since the new pluginloader misses it.

* Fix broken multigpu GAN loading. (deepfakes#280)

* Fix broken multigpu GAN loading.

Fix for loading into the multi GPU model when it needs to load/save the original model.

* Move reference change.

Move reference change for single model so it is defined before load

* Add multi-GPU support to GAN128

Added support for multi-GPU to GAN128

* Added command line used in other relevant steps

* Output Sharpening Added (deepfakes#285)

* Updated to support Output Sharpening arguments

Two new types of output sharpening methods have been added.

One that deals with a Box Blur method and the other with a Gaussian Blur method.

Box Blur method can be called using argument '-sh bsharpen' --- This method is not dynamic and can produce strong sharpening on your images. Sometimes it can yield great results and sometimes entirely the opposite.

Gaussian Blur method can be called using argument '-sh gsharpen' --- This method is dynamic and tries to adjust to your data set. As a result, while the sharpening effect might not be as strong as bsharpen, it is bound to produce a more natural looking sharpened image.

By default the parameter is set to none which will not run any sharpening on your output.

* Output Sharpening added

Two ways of sharpening your output have been added

-sh bsharpen
-sh gsharpen

* Update convert.py

* Fix padding problem with gan conversion (deepfakes#289)

* Fix padding problem with gan conversion

* Revert gan transform_args

* Align Eyes Horizontally After Umeyama + Blur Detection (deepfakes#242)

* Align eyes after umeyama

* Remove comment

* Add cli option

* Update Extract_Crop.py

* Fix convert

* Add blur threshold

* Use mask in blur detection

* Improve blur detection

* Fix indents

* Update extract.py

* Converted LowMem model to the new structure (deepfakes#292)

* converted lowmem to the new structure

* removed old lowmem

* Fix gan128 (deepfakes#288)

* Complete parity fix for GAN128.

This brings GAN128 to parity with GAN in terms of multi GPU support.

Unfortunately, it fails to run due to a naming error.

('The name "model_4" is used 2 times in the model. All layer names should be unique. Layer names: ', ['input_6', 'lambda_1', 'lambda_2', 'model_6', 'model_4', 'model_4'])

* Fix for GAN128 until deepfakes#287 can be resolved

Issue deepfakes#287 details why GAN128 cannot be fixed until Keras is fixed upstream.

* Changes to PR as per Clorr

Made changes to error handling to split into separate PR as requested by Clorr.

* Fixed error handling in train.py (deepfakes#293)

Fixed the error handling in train.py so it doesn't swallow tracelogs.

* Adding tools.py as main script for using tools, as well as integrating all feature requests from deepfakes#255 and deepfakes#278 (deepfakes#298)

* Add tools.py command and control script for use as the main interface for various tools. The structure and approach is the same as faceswap.py
Add many new features to tools/sort.py: various new sorting methods, grouping by folders, logging of file renaming/moving, keeping original files in the input directory, and improved CLI options documentation. Argument parsing has been re-written to interface with tools.py.
Add an empty __init__.py file in the tools directory so that Python registers it as a module and sort.py and future tools can be easily imported.

* Fix various bugs where the correct sorting method would not get called.
Add new sorting method: face-cnn-dissim.
Update help documentation for face-cnn-dissim.
Change default grouping to rename.
Update initial print in all sorting/grouping methods to say precisely which method is being used.

* Major refactor and redesign.
Use dynamic method allocation to avoid large amounts of nested if-elif statements in process() function and to allow easily combine sort and group methods.

Change cli arguments to make them more intuitive and work with the new design.
Previous: '-g/--grouping' -> '-f/--final-processing' {folders,rename}
Previous: '-by/--by' -> '-s/--sort-by' {blur,face,face-cnn,face-cnn-dissim,face-dissim,hist,hist-dissim}
New: '-g/--group-by' {blur,face,face-cnn,hist}
Add: '--logfile' -> '-lg/--logfile' PATH_TO_LOGFILE

Greatly improve grouping performance.
Grouping now has to sort using one of the sorting methods which makes the grouping stable and no longer dependent on how well the the target files are already sorted.
Sorting and grouping methods can be combined in any way. If no -g/--group-by is specified by user, it will default to group by the non '-dissim' version of sort method.
Different combinations of sorting and grouping methods work well for different sets of data.

Fixes
Fix progress updates not showing properly by setting them to print to stdout instead of stderror.
Fix bug in grouping by face-cnn where wrong score method was being called.

Misc
Add documentation for reload_list() and splice_lists() methods because it's not obvious what they do.
Add warning message to tools.py to tell users to make sure they understand how the tool they want to use works before using it.
Add warning message to tools/sort.py to tell users to make sure they understand how the sort tool works before using it.
Update help documentation to reflect new functionality and options.
Set defaults for group by face-cnn to work properly with the correct score method.
Amend commit in order to sign it.

* Perform unittests for all options and combinations of sort and group methods: everything OK.
Fix typos in help documentation.

* Mask refinement option for GAN128 (deepfakes#308)

I was experimenting on GAN128 and saw that mask refinement option was missing so i added that.
Its default value is False and its optional after 15k iterations.

Based on this: https://render.githubusercontent.com/view/ipynb?commit=87d6e7a28ce754acd38d885367b6ceb0be92ec54&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f7368616f616e6c752f66616365737761702d47414e2f383764366537613238636537353461636433386438383533363762366365623062653932656335342f46616365537761705f47414e5f76325f737a3132385f747261696e2e6970796e62&nwo=shaoanlu%2Ffaceswap-GAN&path=FaceSwap_GAN_v2_sz128_train.ipynb&repository_id=115182783&repository_type=Repository#Tips-for-mask-refinement-(optional-after-%3E15k-iters)

* Fix "raw"/"masked" preview labeling

* Fix "raw"/"masked" labeling

* sorttool - fixes, and added --sort-by face-yaw (deepfakes#312)

* Renaming BGR/RGB inputs

* Fix to an UnboundLocalError due to rename. (deepfakes#318)

* FaceLandmarksExtractor comment (deepfakes#317)

* Adds support for arbitrary image rotations (deepfakes#309)

* Add support for user-specified rotation angle in extract

* Added rotation-angle-list option to enumerate a list of angles to rotate through

* Adjust rotation matrix translation coords to avoid cropping

* Merged rotation-angle and rotation-angle-list options into rotate_images option

* Backwards compatibility

* Updated check whether to run image rotator

* Switched rotation convention to use positive angle = clockwise rotation, for backwards compatibility

* Revert " Fix to an UnboundLocalError due to rename." -- Bind the variable in question, rather than replace it with another parameter (deepfakes#320)

* Switching naming of _bgr as discussed to reverse detector call

Switching naming of _bgr as discussed to reverse detector call
renaming iterator in loop for clarity

* Revert "Adds support for arbitrary image rotations (deepfakes#309)"

This reverts commit 44dfd9d.

* Revert "FaceLandmarksExtractor comment (deepfakes#317)"

This reverts commit f79c487.

* Revert "Fix to an UnboundLocalError due to rename. (deepfakes#318)"

This reverts commit 2e2dc84.

* Correction to UnboundLocalError

Tested that "sort -s face-cnn" works correctly after changes to FaceLandmarksExtractor.py
@PhenomenalOnee commented Sep 4, 2019

Where can I find the pretrained IAE model?

@torzdf (Collaborator) commented Sep 4, 2019

There are no pretrained models

@PhenomenalOnee commented Sep 5, 2019

How can I train the IAE model? Can you, @torzdf, provide me with any scripts for data preparation and training? And also, how are the predicted face and mask merged onto the destination face?

@PhenomenalOnee commented Sep 5, 2019

Can someone explain how to train the IAE model step by step?
Currently I am not able to work out how faceswap.py extract works, which functions it calls, etc.

@torzdf (Collaborator) commented Sep 5, 2019

@PhenomenalOnee commented Sep 6, 2019

Thanks @torzdf. Can you explain how the predicted face is warped and swapped with the source face, and how the mask is formed?

@kvrooman (Collaborator) commented Sep 6, 2019

These sorts of questions are best asked on the support forum or on the discord server. GitHub is typically used for bug reporting and testing features.

@AliDjango commented Oct 29, 2019

@Clorr Feedback is welcome.
I will try this model on the Trump/Cage dataset this weekend and post the result.

Is this dataset available to download? I want a dataset so I can see how well different models train over time before starting my own training.

@torzdf (Collaborator) commented Oct 29, 2019

Faceswap does not provide pre-trained models.
