Improve usage for non technical people #2

Closed
deepfakes opened this issue Dec 19, 2017 · 13 comments

Comments

@deepfakes
Owner

It would be great to facilitate use for non-technical people, e.g. by making this an .exe with PyInstaller, or something similar. Any idea is welcome.

@Ganonmaster
Contributor

I envision separate GPU and CPU versions for each target OS.

What I wonder is how the process would look for the end user. Seeing as the training portion takes the most time, you could start by providing the basic "image processor" to the user, and have more advanced users share their models in ready-made archives. I'm unfamiliar with the model format, so I'm not sure whether that opens it up to some kind of security issue. However, I feel like that could be an easy way of simplifying the process for non-technical people. Your video/picture + a pre-made swappable model = easy result.

@cercata

cercata commented Dec 19, 2017

It's quite easy to install on Windows x64 for CPU-based processing:

Until some GUI is added, I don't think it's worth packaging it...

@Ganonmaster
Contributor

Ganonmaster commented Dec 19, 2017

In theory an installation like that works, but for development you ideally want it in some kind of separate environment (virtualenv or similar), so it does not conflict with any local packages you may already have installed.

Packaging is not necessary immediately, but I can imagine that having prebuilt executables of a command-line version would already be a great improvement. It would allow other devs to set up simple frontends that interface with the command-line tool. Having a pre-built command-line tool is a great step towards coupling it with both web and native frontends.

@Ganonmaster
Contributor

Basic command-line usage is almost ready. We can probably look into cooking up pre-built packages once that's done. I might also look into some CI setup for semi-automated builds, but those might be goals to shoot for after Christmas or the start of 2018.

@Yutsa
Contributor

Yutsa commented Dec 23, 2017

When I tried running the convert.py script I got the following error:

Failed to extract from image: /home/edouard/Téléchargements/faceswap-data/source-images/lawrence/107.jpg. Reason: Unable to open contrib/shape_predictor_68_face_landmarks.dat

I had to look into the code to know what to do.

I guess we'll need a way to get this file without having to download it separately if we want to make the tool user friendly, or to say explicitly how and where to download it when it is missing at runtime.
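
A minimal sketch of that kind of startup check (the URL below is dlib's published location for this model; the helper name is just illustrative):

```python
import os
import sys

LANDMARKS_PATH = "contrib/shape_predictor_68_face_landmarks.dat"
# dlib distributes the model as a .bz2 archive at this URL
LANDMARKS_URL = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"

def check_landmarks_model():
    """Fail early with download instructions instead of crashing mid-run."""
    if os.path.isfile(LANDMARKS_PATH):
        return
    sys.exit("Missing {}.\nDownload it from {} and extract it into the "
             "'contrib' folder.".format(LANDMARKS_PATH, LANDMARKS_URL))
```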

@Ganonmaster
Contributor

@Yutsa I merged your PR with the error message. We can most likely also improve the behavior for handling non-existent input/output directories with some "folder does not exist, create it? y/n" dialogues.
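
A rough sketch of such a prompt (the helper name is hypothetical):

```python
import os

def ensure_dir(path):
    """Ask before creating a missing input/output directory."""
    if os.path.isdir(path):
        return path
    answer = input("Folder {} does not exist. Create it? [y/n] ".format(path))
    if answer.strip().lower().startswith("y"):
        os.makedirs(path)
        return path
    raise SystemExit("Aborted: a valid folder is required to continue.")
```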

@Yutsa
Contributor

Yutsa commented Dec 23, 2017

I am working on using one main script that calls the others.

The idea is to get commands like ./faceswap.py train, ./faceswap.py extract, ./faceswap.py convert.

I don't know if this is what you want though.

@Ganonmaster
Contributor

Yes, you can achieve this easily using the subparser feature that comes as part of argparse. I was going to do this myself, but I may not have time today.
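
For reference, a minimal sketch of that approach; the three command names come from the comment above, while the arguments are placeholders:

```python
import argparse

def main():
    parser = argparse.ArgumentParser(prog="faceswap.py")
    subparsers = parser.add_subparsers(dest="command")

    # One subcommand per stage of the pipeline
    for name, helptext in (("extract", "extract faces from images"),
                           ("train", "train a model on extracted faces"),
                           ("convert", "swap faces in images")):
        subparser = subparsers.add_parser(name, help=helptext)
        subparser.add_argument("-i", "--input-dir", default="input")

    args = parser.parse_args()
    print("running:", args.command)

if __name__ == "__main__":
    main()
```

./faceswap.py extract -i photos would then dispatch to the extract branch.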

@Ganonmaster
Contributor

I'm currently looking into creating an initial self-contained build of the command-line tool in its current state. PyInstaller seems like a good option, although I fear it might result in very large binaries.

It would be nice to release it on January 1st if possible!

@Yutsa
Contributor

Yutsa commented Dec 29, 2017

Shouldn't we provide two Dockerfiles, one for GPU and the other for CPU?

Ideally we would upload both images to Docker Hub; that way a user would only have to install Docker and run docker run -it --rm -v [sourceFolder]:/srv user/faceswap:tag bash

I also thought we could maybe add a function to create the video from the swapped pictures after convert.

Ideally, if the same tool could cut the video into frames, extract faces, train, replace faces, and assemble the video back together, that would be best for the user.
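
On the video-assembly idea, here is a sketch of the reassembly step using OpenCV (folder layout, frame extension, and frame rate are all assumptions):

```python
import glob
import cv2  # opencv-python

def frames_to_video(frame_dir, out_path="swapped.mp4", fps=25.0):
    """Stitch converted frames back into a video, in filename order."""
    frames = sorted(glob.glob(frame_dir + "/*.png"))
    if not frames:
        raise SystemExit("No frames found in " + frame_dir)
    height, width = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    for frame_path in frames:
        writer.write(cv2.imread(frame_path))
    writer.release()
```

Extracting frames in the first place could go through ffmpeg or cv2.VideoCapture in much the same way.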

@Ganonmaster
Contributor

In order to use Docker with a GPU, you'd need to split it up further between OpenCL and CUDA. To use Nvidia CUDA with Docker, the only option is nvidia-docker, which is only supported on Linux. Docker with OpenCL is similarly limited: as far as I know, OpenCL support in Docker involves manually giving your container access to your host's GPU driver.

So, assuming that Linux users are by definition more tech-savvy than Windows or macOS users, and considering that Dockerized GPU support is limited to Linux for the time being, I strongly believe that Docker does not help non-technical users at all. Yes, it's nice if you're comfortable with Docker and containers, but is that really a desirable method of distribution for people with very limited command-line knowledge?

I do not think Docker GPU support is going to help in getting this project usable for non-technical users. A Docker Hub image would be nice, but it would be limited to a CPU version, because getting it working on a GPU (CUDA or OpenCL) would still be a pain.

Still, I would encourage you to research and play around with this, and to submit pull requests if you manage to get it working.

@Yutsa
Contributor

Yutsa commented Dec 29, 2017

Yeah, I was actually looking at nvidia-docker; it does look complicated to get a Docker container working with a GPU.

I guess we'll have to wait for this to be better supported.

@gdunstone
Contributor

For dependencies, a useful command is:
pip3 install -r requirements{-gpu}.txt

We could also create two PyPI packages (CPU and GPU).

Clorr closed this as completed Mar 1, 2018
fat-tire mentioned this issue Apr 8, 2018
torzdf pushed a commit that referenced this issue Sep 15, 2018
slight re-factoring
torzdf added a commit that referenced this issue Sep 15, 2018
Initial alignment tool implementation
torzdf added a commit that referenced this issue Feb 9, 2019
model_refactor (#571)
kvrooman referenced this issue in kvrooman/faceswap_ Feb 26, 2019
Repository owner deleted a comment from RunetX Jun 28, 2019