This code implements the model discussed in the paper Landmark Calibration for Facial Expression. Accurately predicting landmarks is critical for detecting subtle emotions such as anger. Here we use principal component analysis to calibrate landmarks. Next, we train a translation model to generate facial expressions from the landmarks. We show that calibration significantly increases the resolution of the generated image.
This code is based on the Pixel Level Translation code found at: https://github.com/MayankSingal/PyTorch-Pixel-Level-Domain-Transfer
Extract the landmarks
python facial_landmarks.py -p shape_predictor_68_face_landmarks.dat -i emotion1.jpg
- p is the path to the pretrained 68-point shape predictor (available at https://github.com/AKSHAYUBHAT/TensorFace/blob/master/openface/models/dlib/shape_predictor_68_face_landmarks.dat)
- i is the input face image
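The core of the extraction step converts dlib's 68-point result into a coordinate array for later processing. A minimal sketch of that conversion is below; the dlib calls are shown in comments, and the helper name `shape_to_np` is an assumption, not necessarily what facial_landmarks.py uses:

```python
import numpy as np

# The real script would use dlib (mirroring the dlib API):
#   import dlib
#   detector = dlib.get_frontal_face_detector()
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   rects = detector(image, 1)          # detect face rectangles
#   shape = predictor(image, rects[0])  # 68 landmark points for the first face

def shape_to_np(shape, n_points=68):
    """Convert a dlib shape (any object whose .part(i) has .x/.y attributes)
    into an (n_points, 2) integer array of (x, y) coordinates."""
    coords = np.zeros((n_points, 2), dtype=int)
    for i in range(n_points):
        p = shape.part(i)
        coords[i] = (p.x, p.y)
    return coords
```

The resulting (68, 2) array is what the calibration step below operates on.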
Calibrate an emotion using SVD
[par_x, par_y] = calibrate(goldface, goldland, targetface, targetland)
- goldface is a high-intensity emotional face
- goldland contains the landmarks for goldface
- targetface is a low-intensity emotional face
- targetland contains the landmarks for targetface
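The geometric part of the calibration can be sketched as an SVD-based similarity alignment (orthogonal Procrustes) of the target landmarks onto the gold landmarks. This is a minimal numpy sketch under that assumption; the repo's `calibrate` also takes the two face images and returns parameters `[par_x, par_y]`, which this sketch does not reproduce, and the name `calibrate_landmarks` is hypothetical:

```python
import numpy as np

def calibrate_landmarks(goldland, targetland):
    """Align target landmarks to the gold landmarks with a similarity
    transform (scale, rotation, translation) estimated via SVD
    (orthogonal Procrustes). Both inputs are (n, 2) arrays; returns the
    calibrated (n, 2) landmarks. Note: R may include a reflection if the
    point sets are degenerate; a full implementation would guard det(R)."""
    g_mean, t_mean = goldland.mean(axis=0), targetland.mean(axis=0)
    g, t = goldland - g_mean, targetland - t_mean   # centre both point sets
    U, S, Vt = np.linalg.svd(t.T @ g)               # SVD of cross-covariance
    R = U @ Vt                                      # optimal rotation
    scale = S.sum() / (t ** 2).sum()                # optimal isotropic scale
    return scale * t @ R + g_mean                   # map target onto gold
```

Applying the recovered transform to the target landmarks moves them into the gold face's frame before the translation model is trained.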
Example calibration for the Happy emotion. The first face is the gold standard; the second face is the target, without (red) and with (green) calibration.
Create paired training data: PID(idx)_CLEAN0_IID(idx2).jpg and PID(idx)_CLEAN1_IID(idx2+1).jpg
- idx is the person id
- CLEAN0 marks the landmark image
- CLEAN1 marks the face image
- idx2 is an optional image counter
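The naming convention above can be sketched as a small helper that builds both filenames of a training pair. This is a hypothetical helper written from the convention as stated (the landmark file gets counter idx2, the paired face gets idx2+1):

```python
def pair_names(idx, idx2=0):
    """Build the paired training filenames for one sample (assumed
    convention): CLEAN0 is the landmark image, CLEAN1 the matching face."""
    landmark = f"PID{idx}_CLEAN0_IID{idx2}.jpg"
    face = f"PID{idx}_CLEAN1_IID{idx2 + 1}.jpg"
    return landmark, face
```

For person 3 and counter 0 this yields PID3_CLEAN0_IID0.jpg and PID3_CLEAN1_IID1.jpg, which the training script can then load as an input/target pair.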
Train the landmark to face GAN:
python train.py datadir epochs modeldir
- datadir is the directory of paired training images
- epochs is the number of training epochs
- modeldir is the directory in which to store model checkpoints
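The three positional arguments of train.py map directly onto an argument parser. A minimal sketch of that front end, assuming plain positional arguments as in the command above (the real train.py may parse them differently):

```python
import argparse

def parse_args(argv=None):
    """Parse train.py's positional arguments: datadir, epochs, modeldir."""
    parser = argparse.ArgumentParser(
        description="Train the landmark-to-face translation GAN")
    parser.add_argument("datadir", help="directory of paired training images")
    parser.add_argument("epochs", type=int, help="number of training epochs")
    parser.add_argument("modeldir", help="directory to store model checkpoints")
    return parser.parse_args(argv)
```

For example, `python train.py ./data 200 ./models` would train for 200 epochs on ./data and write checkpoints to ./models.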
For testing, we use a traditional CNN model trained on the generated high-intensity images.
Presentation : https://youtu.be/wTeku_xW9UE
Paper Link : https://link.springer.com/article/10.1007/s11760-021-01943-0