Download lipsync_v4_73.mat and shape_predictor_68_face_landmarks.dat
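shape_predictor_68_face_landmarks.dat is dlib's pretrained 68-point facial landmark model. As a minimal sketch of how it can be loaded to extract the mouth landmarks (the frame image path here is illustrative, not part of this repo):

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# "frame.jpg" is a placeholder for a single video frame
img = dlib.load_rgb_image("frame.jpg")

for face in detector(img, 1):
    shape = predictor(img, face)
    # in the 68-point scheme, points 48-67 outline the mouth
    mouth = [(shape.part(i).x, shape.part(i).y) for i in range(48, 68)]
```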
This project is a fork of this repository.
Follow this blog for a step-by-step explanation.
Research paper referred to throughout this project: https://www.robots.ox.ac.uk/~vgg/publications/2016/Chung16a/chung16a.pdf
The authors determine the audio-to-video synchronization between mouth motion and speech, targeting TV broadcast material. Their solution to the lip-sync problem is language-independent and speaker-independent, and it is trained without labeled data.
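For reference, the paper trains a two-stream network and compares video and audio embeddings with a contrastive loss. A minimal numpy sketch of that loss (the margin value below is an illustrative assumption, not taken from the paper):

```python
import numpy as np

def contrastive_loss(v, a, y, margin=20.0):
    """Contrastive loss as in Chung & Zisserman (2016).
    v, a: (N, D) video and audio embeddings for N pairs.
    y:    (N,) labels, 1 for in-sync pairs, 0 for off-sync pairs.
    margin is a hyperparameter chosen here for illustration only."""
    d = np.linalg.norm(v - a, axis=1)  # Euclidean distance per pair
    loss = y * d**2 + (1 - y) * np.maximum(margin - d, 0)**2
    return loss.mean() / 2
```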
For the modeling and processing functions: https://github.com/voletiv/syncnet-in-keras
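Given embeddings like those produced by such functions, the paper estimates the sync error by sliding the audio features against the video features and picking the offset with the smallest mean distance. A hedged sketch of that search (the function name, shift range, and array shapes are illustrative):

```python
import numpy as np

def best_sync_offset(video_emb, audio_emb, max_shift=15):
    """video_emb, audio_emb: (T, D) per-frame embeddings, aligned at offset 0.
    Returns the shift (in frames) that minimizes the mean embedding distance."""
    best, best_dist = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            v, a = video_emb[shift:], audio_emb[:len(audio_emb) - shift]
        else:
            v, a = video_emb[:shift], audio_emb[-shift:]
        n = min(len(v), len(a))
        d = np.linalg.norm(v[:n] - a[:n], axis=1).mean()  # mean pairwise distance
        if d < best_dist:
            best, best_dist = shift, d
    return best
```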
VidTIMIT dataset used in this project: http://conradsanderson.id.au/vidtimit
License: MIT