
License: MIT

# FaceSwap

## Classical Approach

This repository implements an end-to-end pipeline for swapping faces in images and videos, using both a classical approach and a deep learning approach.

## Pipeline


### Facial Landmark Detection

Facial landmarks are detected with the dlib library, whose frontal face detector is based on HOG features with a linear SVM classifier.
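A minimal sketch of how the 68-point landmarks can be obtained with dlib is shown below. The predictor file `shape_predictor_68_face_landmarks.dat` is dlib's standard pre-trained model and is assumed here; the path and exact usage in this repository may differ.

```python
# Minimal sketch: 68-point facial landmark detection with dlib.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()  # HOG + linear SVM face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

def get_landmarks(image):
    """Return an (N, 68, 2) array of landmark coordinates for every detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)  # upsample once to catch smaller faces
    all_points = []
    for face in faces:
        shape = predictor(gray, face)
        pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=np.int32)
        all_points.append(pts)
    return np.array(all_points)
```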


### Face Warping using Triangulation and Thin Plate Spline

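As an illustration of the Triangulation step, the sketch below computes a Delaunay triangulation of the landmark points with OpenCV's `Subdiv2D` and returns each triangle as a triple of landmark indices, so the same connectivity can be reused on the destination face. The function name and the filtering of boundary triangles are assumptions, not the repository's exact implementation.

```python
# Hypothetical sketch: Delaunay triangulation of landmark points via OpenCV.
import cv2
import numpy as np

def delaunay_triangles(image_shape, points):
    """Return triangles as triples of landmark indices."""
    points = np.asarray(points, dtype=np.float32)
    h, w = image_shape[:2]
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for x, y in points:
        subdiv.insert((float(x), float(y)))

    triangles = []
    for x1, y1, x2, y2, x3, y3 in subdiv.getTriangleList():
        verts = [(x1, y1), (x2, y2), (x3, y3)]
        # Subdiv2D adds virtual outer vertices far outside the frame;
        # skip any triangle that uses them.
        if not all(0 <= vx < w and 0 <= vy < h for vx, vy in verts):
            continue
        # Map each vertex back to the index of the nearest landmark so the
        # same connectivity can be applied to the other face's landmarks.
        tri = tuple(int(np.argmin(np.linalg.norm(points - np.array(v), axis=1)))
                    for v in verts)
        triangles.append(tri)
    return triangles
```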

Instructions to run FaceSwap using the traditional approach (Triangulation & Thin Plate Spline):

1. Set `input1` and `input2` to the required source and destination paths in the `main` of `Wrapper.py` (see the sketch after this list). Note:
   - If both inputs come from the same video, set `input1` to the video path and `input2` to `None`.
   - If one input is a video and the other an image, set `input1` to the video path and `input2` to the image path.
2. Set `method` to `"TRI"` for Triangulation or `"TPS"` for Thin Plate Spline.
3. Run `Wrapper.py` with `python3 Wrapper.py`.
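The snippet below only illustrates the values referred to in steps 1 and 2; the actual layout of `Wrapper.py`'s `main` may differ, and the paths are hypothetical.

```python
# Illustrative only -- the actual structure of Wrapper.py's main may differ.
if __name__ == "__main__":
    input1 = "Data/source_video.mp4"   # hypothetical path to the source video
    input2 = "Data/target_face.jpg"    # hypothetical path to the destination image;
                                       # use None when both faces come from the same video
    method = "TPS"                     # "TRI" for Triangulation, "TPS" for Thin Plate Spline
```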

Instructions to run PRNet:

1. Download the model weights from https://drive.google.com/file/d/1UoE-XuW1SDLUjZmJPkIZ1MLxvQFgmTFH/view
2. Place the file in `./PRNet/Data/net-data`.
3. Create a conda environment or venv with Python 2.7, TensorFlow 1.13 (GPU version), OpenCV 4.2.1, NumPy, and dlib (an example setup follows this list).
4. In `prnetWrapper.py`, specify the path to the source image/video and the path to the destination image/video.
5. Run from the command line: `python prnetWrapper.py`
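One possible environment setup for the steps above; the exact package versions are assumptions and may need adjusting for Python 2.7 / GPU compatibility on your system.

```bash
# Sketch of an environment matching the requirements listed above.
conda create -n prnet python=2.7
conda activate prnet
pip install tensorflow-gpu==1.13.1 opencv-python numpy dlib
python prnetWrapper.py   # after setting the source/destination paths in the script
```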

## Results

Facial Landmarks

Delaunay Triangulation

Final Swap

Swap in the same frame