
LSRF: Localized and Sparse Receptive Fields for Linear Facial Expression Synthesis based on Global Face Context

In: Multimedia Tools and Applications (MTAP), 2023

This repository provides the official implementation of the following paper:

LSRF: Localized and Sparse Receptive Fields for Linear Facial Expression Synthesis based on Global Face Context
Arbish Akram and Nazar Khan
Department of Computer Science, University of the Punjab, Lahore, Pakistan.
Abstract: Existing generative adversarial network-based methods for facial expression synthesis require large datasets for training, and their performance degrades noticeably when trained on smaller datasets. Moreover, they demand high computational and spatial complexity at inference, making them unsuitable for resource-constrained devices. To address these limitations, this paper presents a linear formulation that learns Localized and Sparse Receptive Fields (LSRF) for facial expression synthesis while considering global face context. In this approach, we extend the sparsity-inducing formulation of the Orthogonal Matching Pursuit (OMP) algorithm by incorporating a locality constraint. This constraint ensures that i) each output pixel observes a localized region of the input face image and ii) neighboring output pixels attend to proximate regions. Extensive qualitative and quantitative experiments demonstrate that the proposed method generates realistic facial expressions and outperforms existing methods. Furthermore, the proposed method can be trained on significantly smaller datasets while exhibiting good generalization to out-of-distribution images.
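For intuition, the sketch below shows one way a locality-constrained OMP fit could look with scikit-learn: each output pixel gets its own sparse linear filter, with candidate input pixels restricted to a square window centered on that pixel. The window size, sparsity level, and function names are illustrative assumptions and not the repository's actual implementation, which additionally encourages neighboring output pixels to select proximate input regions.

# Illustrative sketch only (not the paper's exact formulation): per-output-pixel
# OMP with a hard locality window. X holds flattened input faces, Y holds the
# corresponding flattened target-expression faces.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def learn_localized_sparse_filters(X, Y, img_size, window=9, n_nonzero=10):
    n_pixels = img_size * img_size
    W = np.zeros((n_pixels, n_pixels))
    rows, cols = np.unravel_index(np.arange(n_pixels), (img_size, img_size))
    half = window // 2
    for j in range(n_pixels):
        # locality constraint: candidate inputs lie in a window around pixel j
        mask = (np.abs(rows - rows[j]) <= half) & (np.abs(cols - cols[j]) <= half)
        idx = np.flatnonzero(mask)
        # sparsity constraint: OMP keeps at most n_nonzero of those candidates
        omp = OrthogonalMatchingPursuit(
            n_nonzero_coefs=min(n_nonzero, idx.size), fit_intercept=False)
        omp.fit(X[:, idx], Y[:, j])
        W[j, idx] = omp.coef_
    return W  # synthesis is then a single linear map: y_hat = W @ x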

Test with Pretrained Models

Test with the LSRF model when F=1

python main.py --test_dataset_dir ./testing_imgs/  --weights_dir ./pre-trained_models/ --model LSRF --image_size 128   \
               --f 1  --mode test_inthewild --results_dir ./results/                               

Test with the OMP model when F=1

python main.py --test_dataset_dir ./testing_imgs/ --weights_dir ./pre-trained_models/ --model OMP --image_size 128   \
               --f 1  --mode test_inthewild --results_dir ./results/                               

Test with the LSRF model when F=5

python main.py --test_dataset_dir ./testing_imgs/ --weights_dir ./pre-trained_models/ --model LSRF --image_size 80   \
               --f 5  --mode test_inthewild --results_dir ./results/                               

Test with the OMP model when F=5

python main.py --test_dataset_dir ./testing_imgs/ --weights_dir ./pre-trained_models/ --model OMP --image_size 80   \
               --f 5  --mode test_inthewild --results_dir ./results/                               

Train the Model

  1. Download any facial expression synthesis dataset.
  2. Create a folder structure as described here.
  • Split the images into training and test sets (e.g., 90% for training and 10% for testing).
  • Crop and align the facial images so that the faces are centered (see the preprocessing sketch after this list).
  3. Train the LSRF model
python main.py --train_dataset_dir ./train_dataset/ --weights_dir ./weights/ --model LSRF --image_size 80   \
               --f 9  --beta 60 --mode train --results_dir ./results/
  4. Train the OMP model
python main.py --train_dataset_dir ./train_dataset/ --weights_dir ./weights/ --model OMP --image_size 80   \
               --f 9  --mode train --results_dir ./results/
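Any face detector can be used for the crop-and-align step above. A minimal sketch using OpenCV's bundled Haar cascade is shown below; the margin, output size, and function name are assumptions, and the paper's experiments may rely on a different alignment procedure.

# Minimal preprocessing sketch (not the repository's script): detect the largest
# face, crop it with a small border, and resize so the face is centered.
import cv2

def crop_and_center_face(in_path, out_path, size=80, margin=0.2):
    img = cv2.imread(in_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise RuntimeError("no face detected in " + in_path)
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    m = int(margin * max(w, h))                         # small border around it
    x0, y0 = max(x - m, 0), max(y - m, 0)
    crop = img[y0:y + h + m, x0:x + w + m]
    cv2.imwrite(out_path, cv2.resize(crop, (size, size)))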

Dependencies

  • scikit-learn == 0.22
  • joblib == 1.0
  • scipy == 1.6
  • opencv-python == 4.5
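These pinned versions can be installed with pip, for example:

pip install "scikit-learn==0.22.*" "joblib==1.0.*" "scipy==1.6.*" "opencv-python==4.5.*"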
