
Facial-Expression-Recognition-Zoo (FER-Zoo)


FER-Zoo is a PyTorch toolbox for facial expression recognition (FER). In particular, we focus on affect estimation methods that regress valence-arousal (VA) values. This repository contains the following state-of-the-art (SOTA) FER frameworks:

Method     Venue         Link
Baseline   TAC 2017      [link]
CAF        AAAI 2021     [link]
AVCE       ECCV 2022     [link]
ELIM       NeurIPS 2022  [link]
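For readers new to VA regression, here is a minimal sketch of a two-output regression head over generic backbone features; the class name, feature dimension, and Tanh squashing are illustrative assumptions, not the architecture used by these frameworks.

```python
# Minimal VA regression head sketch (illustrative; not FER-Zoo's architecture).
import torch
import torch.nn as nn

class VARegressor(nn.Module):
    """Maps backbone features to a 2-D output: (valence, arousal)."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),  # one output each for valence and arousal
            nn.Tanh(),          # squash to [-1, 1], the usual VA range
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

va = VARegressor()(torch.randn(4, 512))  # a batch of 4 feature vectors
```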

We also unveil DiffusionFER, the first emotion-analysis dataset generated by Stable Diffusion. The diversity of this human-curated dataset makes it useful for privacy-related face applications such as deepfakes, as well as for in-depth analysis of the relationship between prompts and facial expression patterns.

What's New

  • [Mar. 2023] Add the open DiffusionFER dataset, created with Stable Diffusion WebUI [link].
  • [Mar. 2023] Add Baseline framework.
  • [Mar. 2023] Add training and evaluation code for the FER frameworks.
  • [Mar. 2023] Initial version of FER-Zoo.

Requirements

  • python >= 3.8.0
  • pytorch >= 1.7.1
  • torchvision >= 0.8.0
  • pretrainedmodels >= 0.7.4
  • fabulous >= 0.4.0
  • wandb >= 0.13.0
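A quick way to verify these minimums is to compare installed versions; the helper below is a hedged sketch (the version-parsing scheme is an assumption and ignores pre-release tags).

```python
# Sanity-check installed package versions against the minimums above.
from importlib.metadata import PackageNotFoundError, version  # Python 3.8+

MINIMUMS = {"torch": "1.7.1", "torchvision": "0.8.0", "wandb": "0.13.0"}

def parse(v: str) -> tuple:
    """Numeric dotted prefix only; drops local tags such as '+cu110'."""
    return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())

def check(minimums=MINIMUMS):
    for pkg, required in minimums.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            print(f"{pkg}: not installed (need >= {required})")
            continue
        status = "ok" if parse(installed) >= parse(required) else "too old"
        print(f"{pkg}: {installed} ({status}, need >= {required})")
```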

Public Datasets

  1. Download the four public benchmarks for training and evaluation (downloads become available after each dataset's agreement is accepted).

(For more details, visit each dataset's website.)

  2. Follow the preprocessing rules for each dataset, referring to the official PyTorch custom dataset tutorial.
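As a starting point for step 2, a minimal `torch.utils.data.Dataset` in the style of that tutorial might look like the following; the CSV layout (image path, valence, arousal) is a hypothetical example, since each benchmark ships its own annotation format.

```python
# Hypothetical custom Dataset for VA-annotated face images.
# Assumed CSV rows: <image_path>,<valence>,<arousal>
import csv
import torch
from PIL import Image
from torch.utils.data import Dataset

class VAFaceDataset(Dataset):
    def __init__(self, csv_path, transform=None):
        with open(csv_path, newline="") as f:
            self.rows = list(csv.reader(f))
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        path, valence, arousal = self.rows[idx]
        img = Image.open(path).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        target = torch.tensor([float(valence), float(arousal)])
        return img, target
```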

Training

Just run the script below:

chmod 755 run.sh
./run.sh <dataset_type> <method> <gpu_no> <port_no> 
  • <dataset_type>: 2 options (custom or public).
  • <method>: 4 options (elim, avce, caf, or baseline).
  • <gpu_no>: GPU number(s), e.g., 0 (or 0,1 for multiple GPUs).
  • <port_no>: port number used to distinguish workers (e.g., 12345).

Evaluation

  • Evaluation is performed automatically at each print_check point during the training phase.
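This pattern can be sketched as follows; `print_check` is the interval name used above, while the function and its arguments are assumptions for illustration.

```python
# Hypothetical "evaluate every print_check steps" training loop.
def run_with_periodic_eval(num_steps, print_check, train_step, evaluate):
    """Call train_step each step; run evaluate every print_check steps."""
    results = []
    for step in range(1, num_steps + 1):
        train_step(step)
        if step % print_check == 0:
            results.append((step, evaluate()))
    return results
```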

Milestone

  • Build SOTA FER frameworks
  • Upload pre-trained weights
  • Bench-marking table

Acknowledgments

This repository is partially inspired by FaceX-Zoo and InsightFace_Pytorch.

Citation

If our work is useful for your research, please consider citing the following BibTeX entries:

@inproceedings{kim2021contrastive,
    title={Contrastive adversarial learning for person independent facial emotion recognition},
    author={Kim, Daeha and Song, Byung Cheol},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={35},
    number={7},
    pages={5948--5956},
    year={2021}
}

@inproceedings{kim2022emotion,
    title={Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition},
    author={Kim, Daeha and Song, Byung Cheol},
    booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XIII},
    pages={178--195},
    year={2022},
    organization={Springer}
}

@misc{kim2022elim,
    author={Kim, Daeha and Song, Byung Cheol},
    title={Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition},
    year={2022},
    eprint={arXiv:2209.12172}
}

Contact (or collaborate)

If you have any questions or would like to collaborate, feel free to contact Daeha Kim.

Licensing

The DiffusionFER dataset is available under the CC0 1.0 license. For the license of each FER framework, please refer to the terms of the respective conference.
