forked from iperov/DeepFaceLab
# New Training Options
Wyverex-GR6 edited this page Nov 19, 2022 · 11 revisions
The MVE-DFL fork introduces new training options, both to help you better see training progress (via a new Preview Panel that can show more pictures and displays each picture's file name) and to help you train better models via new augmentation options. It also lets you use configuration files to save, re-use, and manually edit settings without going through all the options every time you run training.
**Session name**
You can name this training session; the name is saved in `summary.txt` so you can later review / remind yourself of the options you selected in this session.
**Maximum N backups**
If your disk space is limited, or you simply do not want too many backups, you can limit the maximum number of backups that are saved (the latest backup replaces the earliest one).
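The rotation this implies can be sketched in a few lines. Note that `prune_backups` is a hypothetical helper, and it assumes backup folders are named so that lexical order matches chronological order (e.g. zero-padded numbers); it is not the fork's actual implementation.

```python
import shutil
from pathlib import Path

def prune_backups(backup_root: Path, max_backups: int) -> list[str]:
    """Keep only the `max_backups` newest backup folders, deleting the oldest.

    Assumes folder names sort chronologically (e.g. '01', '02', ...).
    Returns the names of the surviving backups, oldest first.
    """
    backups = sorted(p for p in backup_root.iterdir() if p.is_dir())
    excess = backups[:-max_backups] if max_backups > 0 else []
    for old in excess:
        shutil.rmtree(old)
    return [p.name for p in backups[len(excess):]]
```

With five backups on disk and a limit of three, the two oldest folders are removed on the next rotation.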
**Number of samples to preview**
The number of pictures you want to preview during training. Too big a number will easily make the preview panel run off your screen.
**Use old preview panel**
Enable this if you would prefer to use the old preview panel instead of the new one.
**Retrain high loss samples**
Periodically retrains the last 16 "high-loss" samples, the ones that are the most problematic for the model.
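One way to keep track of the hardest samples is a small fixed-capacity buffer ordered by loss. This `HighLossBuffer` class is an illustrative sketch, not the fork's actual mechanism:

```python
import heapq

class HighLossBuffer:
    """Track the `capacity` highest-loss samples seen so far.

    Uses a min-heap, so the cheapest entry to evict is always at the root.
    """
    def __init__(self, capacity: int = 16):
        self.capacity = capacity
        self._heap: list[tuple[float, int]] = []  # (loss, sample_id)

    def push(self, loss: float, sample_id: int) -> None:
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, (loss, sample_id))
        elif loss > self._heap[0][0]:
            # New sample is harder than the easiest buffered one: swap it in.
            heapq.heapreplace(self._heap, (loss, sample_id))

    def drain(self) -> list[int]:
        """Return buffered sample ids (highest loss first) and reset."""
        ids = [sid for _, sid in sorted(self._heap, reverse=True)]
        self._heap = []
        return ids
```

Every N iterations the trainer could call `drain()` and feed those samples through an extra training step.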
**Use fp16**
Experimental. It should increase training/inference speed and reduce model size, but it can crash the model or cause other issues.
**Max cpu cores to use**
Limits how many CPU cores will be used, if you need or want to. It is recommended not to exceed the number of physical cores; 8 is considered a good value.
**Loss function**
You can change the loss function that is used for image quality assessment. Options are `SSIM`, `MS-SSIM`, or `MS-SSIM+L1`.
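To illustrate how a structural term blends with a pixel-wise term in an `MS-SSIM+L1`-style loss, here is a deliberately simplified sketch: it computes SSIM from global image statistics (real SSIM uses local windows, and MS-SSIM evaluates several scales), and the `alpha = 0.84` weighting is a commonly cited choice, not necessarily what this fork uses.

```python
import numpy as np

def ssim_global(a: np.ndarray, b: np.ndarray,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> float:
    """Toy SSIM over whole-image statistics (real SSIM is windowed)."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def ssim_l1_loss(pred: np.ndarray, target: np.ndarray, alpha: float = 0.84) -> float:
    """Blend structural dissimilarity (1 - SSIM) with pixel-wise L1."""
    structural = 1.0 - ssim_global(pred, target)
    pixelwise = np.abs(pred - target).mean()
    return alpha * structural + (1.0 - alpha) * pixelwise
```

Identical images give a loss of zero; the L1 term penalises absolute brightness/colour errors that a purely structural term can under-weight.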
**Learning rate**
The rate at which the model learns. A lower value can reduce the chance of model collapse and increase the amount of learned detail, but it will slow down training. The default value is 5e-05 (0.00005).
All of the following augmentation options challenge the model, which results in better learning, at the cost of training time.
**Enable random downsample of samples**
Challenges the model by making some samples smaller.
**Enable random noise added to samples**
Challenges the model by adding noise to some samples.
**Enable random blur of samples**
Challenges the model by blurring some samples.
**Enable random jpeg compression of samples**
Challenges the model by applying JPEG compression's quality degradation to some samples.
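The downsample, noise, and blur degradations above can each be sketched in a few lines of NumPy. These helpers are illustrative only (the fork's actual augmenters differ); images are assumed to be float arrays in [0, 1], and `random_downsample` expects an H×W×C layout. JPEG-compression augmentation additionally needs a codec round-trip, e.g. OpenCV's `imencode`/`imdecode`, so it is only noted in a comment.

```python
import numpy as np

def random_downsample(img: np.ndarray, rng: np.random.Generator,
                      max_factor: int = 4) -> np.ndarray:
    """Shrink by a random integer factor (block mean), then scale back up
    with nearest-neighbour repetition. Expects an H x W x C float image."""
    f = int(rng.integers(2, max_factor + 1))
    h, w = img.shape[:2]
    small = img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f, -1).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, f, axis=0), f, axis=1)

def random_noise(img: np.ndarray, rng: np.random.Generator,
                 sigma: float = 0.05) -> np.ndarray:
    """Add Gaussian noise, keeping pixel values in [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Separable box blur: a uniform 1-D kernel applied along each axis."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, out)

# JPEG augmentation would round-trip through a codec at a random quality,
# e.g. cv2.imencode('.jpg', ...) followed by cv2.imdecode(...).
```

Applying each degradation to only a random subset of samples (and with random strength) is what forces the model to learn robust features rather than memorising clean inputs.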
**Enable random shadows and highlights of samples**
Helps the model cope with dark and bright areas. Use `src` if your src dataset lacks shadows / different lighting situations, `dst` to help generalization, or `all` for both reasons.
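A minimal shadow augmentation can be faked by darkening a random region; real implementations use soft-edged masks, but this rectangular sketch (an illustrative helper, not the fork's code) shows the idea. A `strength` above 1 would produce a highlight instead.

```python
import numpy as np

def random_shadow(img: np.ndarray, rng: np.random.Generator,
                  strength: float = 0.5) -> np.ndarray:
    """Darken a random rectangular region of a float image to fake a shadow."""
    h, w = img.shape[:2]
    y0 = int(rng.integers(0, h // 2))
    x0 = int(rng.integers(0, w // 2))
    y1 = int(rng.integers(y0 + 1, h + 1))
    x1 = int(rng.integers(x0 + 1, w + 1))
    out = img.copy()
    out[y0:y1, x0:x1] *= strength  # strength > 1 would give a highlight
    return out
```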
**Random color**
Samples are randomly rotated around the L axis in the LAB colorspace, which helps generalize training.
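Rotating around the L axis means applying a 2-D rotation to the a/b chroma channels while leaving lightness untouched. The sketch below assumes the image is already in LAB with channels ordered L, a, b; the ±10° range is an illustrative choice, not the fork's actual parameter.

```python
import numpy as np

def random_lab_rotation(lab_img: np.ndarray, rng: np.random.Generator,
                        max_deg: float = 10.0) -> np.ndarray:
    """Rotate the a/b chroma channels around the L axis by a random angle.

    Lightness (channel 0) is untouched; chroma magnitude is preserved.
    """
    theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    out = lab_img.copy()
    a, b = lab_img[..., 1], lab_img[..., 2]
    out[..., 1] = c * a - s * b
    out[..., 2] = s * a + c * b
    return out
```

Because only hue shifts while lightness and saturation stay fixed, the model sees plausible colour variations of the same face, which is why this helps generalization.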