Audio-visual Automatic Group Affect Analysis

This repository contains the PyTorch implementation of the Audio-visual Automatic Group Affect Analysis method.

Dataset

For the Video-level Group Affect (VGAF) dataset, contact abhinav.dhall@monash.edu and emotiw2014@gmail.com.

Training

python VGAFNet_fusion.py

In this file, update the paths to the pre-processed features used as input. For the holistic channel, frames are sampled from the original video. For the face-level channel, VGGFace features are extracted. Please refer to the paper for more details on data pre-processing.
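The pre-processing scripts are not included here. As a rough illustration of the holistic channel's frame-sampling step, the sketch below uniformly samples frame indices from a video; the function name, sample count, and short-clip handling are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def sample_frame_indices(num_frames: int, num_samples: int = 16) -> np.ndarray:
    """Uniformly sample `num_samples` frame indices from a clip of `num_frames` frames.

    Hypothetical helper: the actual sampling strategy is described in the paper.
    """
    if num_frames <= num_samples:
        # Clip shorter than the sample budget: cycle through the available frames.
        return np.arange(num_samples) % num_frames
    # Evenly spaced indices spanning the whole clip.
    return np.linspace(0, num_frames - 1, num_samples).astype(int)

# Example: pick 8 frames from a 100-frame video.
print(sample_frame_indices(100, 8))
```

The sampled indices would then be used to read the corresponding frames before feature extraction.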

Citation

If you find the code useful for your research, please consider citing our work:

@article{sharma2021audio,
  title={Audio-visual automatic group affect analysis},
  author={Sharma, Garima and Dhall, Abhinav and Cai, Jianfei},
  journal={IEEE Transactions on Affective Computing},
  year={2021},
  publisher={IEEE}
}

Contact

In case of any questions, please contact garima.sharma1@monash.edu.
