# TSAV Affect Analysis in the Wild (ABAW2020 submission)

**Two-Stream Aural-Visual Affect Analysis in the Wild**

(Submission to the Affective Behavior Analysis in-the-wild (ABAW) 2020 competition)

## Getting Started

Required Python packages:

- PyTorch 1.4
- Torchaudio 0.4.0
- tqdm
- NumPy
- OpenCV 4.2.0

You also need the following external tools:

- mkvmerge and mkvextract (from MKVToolNix)
- ffmpeg
- sox

(A quick environment check is sketched after this list.)
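
The following is a minimal sketch (not part of the original repository) for checking that the packages and external tools listed above are available before you start:

```python
# check_env.py -- minimal environment check (illustrative, not part of this repository).
# Verifies that the required Python packages and external command-line tools are available.
import importlib
import shutil

PACKAGES = ["torch", "torchaudio", "tqdm", "numpy", "cv2"]
TOOLS = ["mkvmerge", "mkvextract", "ffmpeg", "sox"]

for name in PACKAGES:
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{name}: MISSING")

for tool in TOOLS:
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'MISSING (not on PATH)'}")
```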

## Testing

To reproduce the competition results, download our model and alignment files:
Model and Alignment data

You need the original videos from ABAW.

Clone the repository and extract the data before running create_database.py.

create_database.py extracts and aligns the faces from the Aff-Wild2 videos and extracts the audio tracks.

test_val_aff2.py produces the validation and test label files.

Please make sure to check and adjust the paths in both scripts before running them; a hypothetical example of the kind of paths to set is shown below.
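
As a rough illustration only (the actual variable names inside create_database.py and test_val_aff2.py may differ), the paths you typically need to point at your local data look like this:

```python
# Hypothetical path settings -- edit the corresponding values inside
# create_database.py and test_val_aff2.py; the real variable names may differ.
VIDEO_DIR = "/data/aff-wild2/videos"          # original Aff-Wild2 videos from ABAW
ALIGNMENT_DIR = "/data/aff-wild2/alignment"   # downloaded alignment/mask data
DATABASE_DIR = "/data/aff-wild2/database"     # output of create_database.py
MODEL_PATH = "/data/models/tsav_model.pth"    # downloaded TSAV model checkpoint
LABEL_OUTPUT_DIR = "/data/aff-wild2/labels"   # where test_val_aff2.py writes label files
```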

Be aware that the whole process takes a long time and requires considerable disk space:

- Database creation (face extraction, face alignment, mask rendering, audio extraction): 3+ hours and about 31 GiB
- Model inference: about 7 hours on an RTX 2080 Ti

## Citation

Please cite our paper in your publications if the paper, our code, or our database alignment/mask data helps your research:

(We will update the reference as soon as the full paper is published.)

@misc{kuhnke2020twostream,
    title={Two-Stream Aural-Visual Affect Analysis in the Wild},
    author={Felix Kuhnke and Lars Rumberg and J{\"o}rn Ostermann},
    year={2020},
    eprint={2002.03399},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Link to the paper: [TSAV](https://arxiv.org/abs/2002.03399)

The model and alignment data are restricted to research purposes only. If you use the dataset, code, or alignments, please acknowledge the effort by citing the corresponding papers.
