E:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Model A Directory: E:\Documents\Faceswap\facesA
Model B Directory: E:\Documents\Faceswap\facesB
Training data directory: E:\Documents\Faceswap\model
Loading data, this may take a while...
Loading Model from Model_LowMem plugin...
Using live preview
WARNING:tensorflow:From E:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From E:\OpenFaceSwap\python\python-3.6.3.amd64\lib\site-packages\keras\backend\tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2018-10-22 17:44:25.332073: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-22 17:44:25.336392: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1344] Found device 0 with properties:
name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.392
pciBusID: 0000:01:00.0
totalMemory: 4.00GiB freeMemory: 3.26GiB
2018-10-22 17:44:25.340128: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0
2018-10-22 17:44:25.833461: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-10-22 17:44:25.835191: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0
2018-10-22 17:44:25.836278: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N
2018-10-22 17:44:25.837483: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2996 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
loaded model weights
Loading Trainer from Model_LowMem plugin...
Starting. Press "Enter" to stop training and save model
2018-10-22 17:44:31.797969: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.29GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-10-22 17:44:31.895882: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.56GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-10-22 17:44:32.246701: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.55GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-10-22 17:44:32.346266: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.09GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-10-22 17:44:32.443339: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.33GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-10-22 17:44:32.691115: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.52GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
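The BFC allocator warnings above are explicitly non-fatal: on this 4 GiB GTX 1050 Ti, TensorFlow reserved about 3 GiB and some ops wanted more, so it fell back to slower paths. If they become a problem, a common TF 1.x mitigation (a configuration sketch, not something this log's tooling does itself; the session setup shown is an assumption about how the model is run) is to let GPU memory grow on demand or cap the per-process fraction:

```python
# Sketch for TF 1.x (the version running in this log). allow_growth makes
# the allocator request memory incrementally instead of one large block;
# per_process_gpu_memory_fraction optionally caps total usage.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.8  # optional cap

sess = tf.Session(config=config)
# If training through Keras, hand the session over:
# import keras.backend as K; K.set_session(sess)
```

Lowering the batch size has a similar effect and is usually the first knob to try on a 4 GiB card.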
saved model weights loss_A: 0.01986, loss_B: 0.01826
saved model weights loss_A: 0.02079, loss_B: 0.01828
saved model weights loss_A: 0.01968, loss_B: 0.01832
saved model weights loss_A: 0.02018, loss_B: 0.01804
saved model weights loss_A: 0.01997, loss_B: 0.01761
saved model weights loss_A: 0.02058, loss_B: 0.01637
saved model weights loss_A: 0.01994, loss_B: 0.01780
saved model weights loss_A: 0.02049, loss_B: 0.01991
saved model weights loss_A: 0.01952, loss_B: 0.01886
saved model weights loss_A: 0.02062, loss_B: 0.01730
saved model weights loss_A: 0.02027, loss_B: 0.01608
saved model weights loss_A: 0.02063, loss_B: 0.01804
saved model weights loss_A: 0.02073, loss_B: 0.01828
saved model weights loss_A: 0.02091, loss_B: 0.01901
saved model weights loss_A: 0.02025, loss_B: 0.01833
saved model weights loss_A: 0.01971, loss_B: 0.01845
saved model weights loss_A: 0.02009, loss_B: 0.01691
saved model weights loss_A: 0.01945, loss_B: 0.01716
saved model weights loss_A: 0.02096, loss_B: 0.01735
saved model weights loss_A: 0.01957, loss_B: 0.01728
saved model weights loss_A: 0.02034, loss_B: 0.01923
saved model weights loss_A: 0.02006, loss_B: 0.01778
saved model weights loss_A: 0.02109, loss_B: 0.01872
saved model weights loss_A: 0.01896, loss_B: 0.01840
saved model weights loss_A: 0.01997, loss_B: 0.01749
saved model weights loss_A: 0.02006, loss_B: 0.01722
saved model weights loss_A: 0.02008, loss_B: 0.01749
saved model weights loss_A: 0.02056, loss_B: 0.01779
saved model weights loss_A: 0.01963, loss_B: 0.01614
saved model weights loss_A: 0.02021, loss_B: 0.01824
saved model weights loss_A: 0.02078, loss_B: 0.01788
saved model weights loss_A: 0.01948, loss_B: 0.01712
saved model weights loss_A: 0.01895, loss_B: 0.01753
saved model weights loss_A: 0.01984, loss_B: 0.01802
saved model weights loss_A: 0.01963, loss_B: 0.01874
saved model weights loss_A: 0.01956, loss_B: 0.01743
saved model weights loss_A: 0.01931, loss_B: 0.01725
saved model weights loss_A: 0.01905, loss_B: 0.01762
saved model weights loss_A: 0.02071, loss_B: 0.01788
saved model weights loss_A: 0.01886, loss_B: 0.01691
saved model weights loss_A: 0.01940, loss_B: 0.01793
saved model weights loss_A: 0.01874, loss_B: 0.01907
saved model weights loss_A: 0.02021, loss_B: 0.01730
saved model weights loss_A: 0.02172, loss_B: 0.01713
saved model weights loss_A: 0.02035, loss_B: 0.01703
saved model weights loss_A: 0.01982, loss_B: 0.01808
saved model weights loss_A: 0.01882, loss_B: 0.01710
saved model weights loss_A: 0.01943, loss_B: 0.01831
saved model weights loss_A: 0.01956, loss_B: 0.01896
saved model weights loss_A: 0.01956, loss_B: 0.01765
saved model weights loss_A: 0.01978, loss_B: 0.01687
saved model weights loss_A: 0.02070, loss_B: 0.01771
saved model weights loss_A: 0.02040, loss_B: 0.01712
saved model weights loss_A: 0.01983, loss_B: 0.01697
saved model weights loss_A: 0.02021, loss_B: 0.01715
saved model weights loss_A: 0.01910, loss_B: 0.01792
saved model weights loss_A: 0.01902, loss_B: 0.01699
saved model weights loss_A: 0.02112, loss_B: 0.01787
saved model weights loss_A: 0.02038, loss_B: 0.01860
saved model weights loss_A: 0.01976, loss_B: 0.01733
saved model weights loss_A: 0.01875, loss_B: 0.01794
saved model weights loss_A: 0.01951, loss_B: 0.01660
saved model weights loss_A: 0.02059, loss_B: 0.01675
saved model weights loss_A: 0.01959, loss_B: 0.01740
saved model weights loss_A: 0.02083, loss_B: 0.01723
saved model weights loss_A: 0.01969, loss_B: 0.01724
saved model weights loss_A: 0.02016, loss_B: 0.01719
saved model weights loss_A: 0.01854, loss_B: 0.01678
saved model weights loss_A: 0.01947, loss_B: 0.01759
saved model weights loss_A: 0.01952, loss_B: 0.01808
saved model weights loss_A: 0.01980, loss_B: 0.01756
saved model weights loss_A: 0.01947, loss_B: 0.01779
saved model weights loss_A: 0.01949, loss_B: 0.01788
saved model weights loss_A: 0.02058, loss_B: 0.01756
saved model weights loss_A: 0.01906, loss_B: 0.01808
saved model weights loss_A: 0.01881, loss_B: 0.01810
saved model weights loss_A: 0.01832, loss_B: 0.01661
saved model weights loss_A: 0.02063, loss_B: 0.01702
saved model weights loss_A: 0.01968, loss_B: 0.01647
saved model weights loss_A: 0.01919, loss_B: 0.01734
saved model weights loss_A: 0.01955, loss_B: 0.01803
saved model weights loss_A: 0.01942, loss_B: 0.01574
saved model weights loss_A: 0.02000, loss_B: 0.01788
saved model weights loss_A: 0.02007, loss_B: 0.01817
saved model weights loss_A: 0.02049, loss_B: 0.01811
saved model weights loss_A: 0.01859, loss_B: 0.01852
saved model weights loss_A: 0.01907, loss_B: 0.01625
saved model weights loss_A: 0.01928, loss_B: 0.01829
saved model weights loss_A: 0.02147, loss_B: 0.01849
saved model weights loss_A: 0.02059, loss_B: 0.01741
saved model weights loss_A: 0.02000, loss_B: 0.01665
saved model weights loss_A: 0.01997, loss_B: 0.01694
saved model weights loss_A: 0.01946, loss_B: 0.01731
saved model weights loss_A: 0.01899, loss_B: 0.01709
saved model weights loss_A: 0.01901, loss_B: 0.01721
saved model weights loss_A: 0.02002, loss_B: 0.01764
saved model weights loss_A: 0.01920, loss_B: 0.01653
saved model weights loss_A: 0.01942, loss_B: 0.01762
saved model weights loss_A: 0.01891, loss_B: 0.01696
saved model weights loss_A: 0.01942, loss_B: 0.01751
saved model weights loss_A: 0.01896, loss_B: 0.01701
saved model weights loss_A: 0.01990, loss_B: 0.01704
saved model weights loss_A: 0.02009, loss_B: 0.01730
saved model weights loss_A: 0.01948, loss_B: 0.01716
saved model weights loss_A: 0.01890, loss_B: 0.01766
saved model weights loss_A: 0.01844, loss_B: 0.01742
saved model weights loss_A: 0.01983, loss_B: 0.01623
saved model weights loss_A: 0.02000, loss_B: 0.01732
saved model weights loss_A: 0.01945, loss_B: 0.01774
saved model weights loss_A: 0.02021, loss_B: 0.01652
saved model weights loss_A: 0.01952, loss_B: 0.01648
saved model weights loss_A: 0.01978, loss_B: 0.01628
saved model weights loss_A: 0.02050, loss_B: 0.01711
saved model weights loss_A: 0.01920, loss_B: 0.01746
saved model weights loss_A: 0.02015, loss_B: 0.01727
saved model weights loss_A: 0.01892, loss_B: 0.01817
saved model weights loss_A: 0.01994, loss_B: 0.01706
saved model weights loss_A: 0.01930, loss_B: 0.01696
saved model weights loss_A: 0.01900, loss_B: 0.01761
saved model weights loss_A: 0.01920, loss_B: 0.01786
saved model weights loss_A: 0.01880, loss_B: 0.01725
saved model weights loss_A: 0.01968, loss_B: 0.01812
saved model weights loss_A: 0.01909, loss_B: 0.01711
saved model weights loss_A: 0.02042, loss_B: 0.01671
saved model weights loss_A: 0.01846, loss_B: 0.01721
saved model weights loss_A: 0.01986, loss_B: 0.01719
saved model weights loss_A: 0.01897, loss_B: 0.01630
saved model weights loss_A: 0.01863, loss_B: 0.01771
saved model weights loss_A: 0.01940, loss_B: 0.01648
saved model weights loss_A: 0.01952, loss_B: 0.01724
saved model weights loss_A: 0.01885, loss_B: 0.01768
saved model weights loss_A: 0.01940, loss_B: 0.01836
saved model weights loss_A: 0.01868, loss_B: 0.01770
saved model weights loss_A: 0.01974, loss_B: 0.01741
saved model weights loss_A: 0.01941, loss_B: 0.01723
saved model weights loss_A: 0.02001, loss_B: 0.01746
saved model weights loss_A: 0.01874, loss_B: 0.01606
saved model weights loss_A: 0.02055, loss_B: 0.01697
saved model weights loss_A: 0.01894, loss_B: 0.01764
saved model weights loss_A: 0.01981, loss_B: 0.01750
saved model weights loss_A: 0.01987, loss_B: 0.01817
saved model weights loss_A: 0.01981, loss_B: 0.01672
saved model weights loss_A: 0.01860, loss_B: 0.01668
saved model weights loss_A: 0.01943, loss_B: 0.01633
saved model weights loss_A: 0.01874, loss_B: 0.01779
saved model weights loss_A: 0.01853, loss_B: 0.01656
saved model weights loss_A: 0.02005, loss_B: 0.01632
saved model weights loss_A: 0.01955, loss_B: 0.01718
saved model weights loss_A: 0.01906, loss_B: 0.01741
saved model weights loss_A: 0.01844, loss_B: 0.01663
saved model weights loss_A: 0.01926, loss_B: 0.01647
saved model weights loss_A: 0.01996, loss_B: 0.01635
saved model weights loss_A: 0.01882, loss_B: 0.01639
saved model weights loss_A: 0.01998, loss_B: 0.01748
saved model weights loss_A: 0.01909, loss_B: 0.01680
saved model weights loss_A: 0.01853, loss_B: 0.01746
saved model weights loss_A: 0.01929, loss_B: 0.01665
saved model weights loss_A: 0.01901, loss_B: 0.01741
saved model weights loss_A: 0.01984, loss_B: 0.01689
saved model weights loss_A: 0.01886, loss_B: 0.01650
saved model weights loss_A: 0.01855, loss_B: 0.01784
saved model weights loss_A: 0.02059, loss_B: 0.01744
saved model weights loss_A: 0.01983, loss_B: 0.01772
saved model weights loss_A: 0.01902, loss_B: 0.01722
saved model weights loss_A: 0.01904, loss_B: 0.01705
saved model weights loss_A: 0.01915, loss_B: 0.01689
saved model weights loss_A: 0.01912, loss_B: 0.01555
saved model weights loss_A: 0.02005, loss_B: 0.01775
saved model weights loss_A: 0.01969, loss_B: 0.01804
saved model weights loss_A: 0.01900, loss_B: 0.01691
saved model weights loss_A: 0.01881, loss_B: 0.01645
saved model weights loss_A: 0.01877, loss_B: 0.01712
saved model weights loss_A: 0.01787, loss_B: 0.01663
saved model weights loss_A: 0.02017, loss_B: 0.01691
saved model weights loss_A: 0.01888, loss_B: 0.01719
saved model weights loss_A: 0.01910, loss_B: 0.01640
saved model weights loss_A: 0.01909, loss_B: 0.01660
saved model weights loss_A: 0.01879, loss_B: 0.01717
saved model weights loss_A: 0.01948, loss_B: 0.01725
saved model weights loss_A: 0.01907, loss_B: 0.01656
saved model weights loss_A: 0.01965, loss_B: 0.01621
saved model weights loss_A: 0.01891, loss_B: 0.01725
saved model weights loss_A: 0.02028, loss_B: 0.01704
saved model weights loss_A: 0.01886, loss_B: 0.01844
saved model weights loss_A: 0.01868, loss_B: 0.01757
saved model weights loss_A: 0.01842, loss_B: 0.01665
saved model weights loss_A: 0.01873, loss_B: 0.01612
saved model weights loss_A: 0.01888, loss_B: 0.01847
saved model weights loss_A: 0.01835, loss_B: 0.01706
saved model weights loss_A: 0.01851, loss_B: 0.01609
saved model weights loss_A: 0.01827, loss_B: 0.01693
saved model weights loss_A: 0.01858, loss_B: 0.01558
saved model weights loss_A: 0.01894, loss_B: 0.01773
saved model weights loss_A: 0.01916, loss_B: 0.01643
saved model weights loss_A: 0.01938, loss_B: 0.01693
saved model weights loss_A: 0.01906, loss_B: 0.01633
saved model weights loss_A: 0.01883, loss_B: 0.01715
saved model weights loss_A: 0.01889, loss_B: 0.01688
saved model weights loss_A: 0.01878, loss_B: 0.01728
saved model weights loss_A: 0.01941, loss_B: 0.01718
saved model weights loss_A: 0.01852, loss_B: 0.01789
saved model weights loss_A: 0.01956, loss_B: 0.01705
saved model weights loss_A: 0.01868, loss_B: 0.01645
saved model weights loss_A: 0.01965, loss_B: 0.01629
saved model weights loss_A: 0.01853, loss_B: 0.01654
saved model weights loss_A: 0.01882, loss_B: 0.01811
saved model weights loss_A: 0.01868, loss_B: 0.01693
saved model weights loss_A: 0.01870, loss_B: 0.01768
saved model weights loss_A: 0.01861, loss_B: 0.01710
saved model weights loss_A: 0.01810, loss_B: 0.01890
saved model weights loss_A: 0.01777, loss_B: 0.01734
saved model weights loss_A: 0.01867, loss_B: 0.01751
saved model weights loss_A: 0.01911, loss_B: 0.01526
saved model weights loss_A: 0.01909, loss_B: 0.01785
saved model weights loss_A: 0.02007, loss_B: 0.01606
saved model weights loss_A: 0.02028, loss_B: 0.01653
saved model weights loss_A: 0.01834, loss_B: 0.01599
saved model weights loss_A: 0.01986, loss_B: 0.01720
saved model weights loss_A: 0.01796, loss_B: 0.01737
saved model weights loss_A: 0.01888, loss_B: 0.01692
saved model weights loss_A: 0.01975, loss_B: 0.01714
saved model weights loss_A: 0.01912, loss_B: 0.01772
saved model weights loss_A: 0.01903, loss_B: 0.01682
saved model weights loss_A: 0.01894, loss_B: 0.01680
saved model weights loss_A: 0.01917, loss_B: 0.01682
saved model weights loss_A: 0.01927, loss_B: 0.01759
saved model weights loss_A: 0.01857, loss_B: 0.01606
saved model weights loss_A: 0.02021, loss_B: 0.01592
saved model weights loss_A: 0.01841, loss_B: 0.01711
saved model weights loss_A: 0.01883, loss_B: 0.01687
saved model weights loss_A: 0.01923, loss_B: 0.01617
saved model weights loss_A: 0.01841, loss_B: 0.01625
saved model weights loss_A: 0.01933, loss_B: 0.01560
saved model weights loss_A: 0.01815, loss_B: 0.01614
[
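The readings above drift from roughly loss_A 0.020 / loss_B 0.018 at the start to about 0.018 / 0.016 at the end, i.e. training is still converging slowly. A quick way to check a log like this (a throwaway sketch using only the standard library; nothing here is part of the faceswap tooling) is to extract the loss pairs with a regular expression:

```python
import re

def parse_losses(text):
    """Extract (loss_A, loss_B) pairs from faceswap-style training output."""
    pattern = re.compile(r"loss_A: ([0-9.]+), loss_B: ([0-9.]+)")
    return [(float(a), float(b)) for a, b in pattern.findall(text)]

# Two sample lines in the same format as the log above.
log = ("saved model weights loss_A: 0.01986, loss_B: 0.01826 "
       "saved model weights loss_A: 0.01815, loss_B: 0.01614")
pairs = parse_losses(log)
avg_a = sum(a for a, _ in pairs) / len(pairs)
print(pairs)   # [(0.01986, 0.01826), (0.01815, 0.01614)]
print(avg_a)
```

Feeding the full log through `parse_losses` and comparing the average of the first and last few dozen pairs makes the downward trend easy to confirm, or to plot.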