
A-Project-on-OMG-Emotion-Challenge-2018

This is the code repository for the OMG Emotion Challenge 2018.
arXiv paper: Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features

Prerequisites

To run this code, install the required libraries first.

Instructions

In data preparation, all videos are downloaded and split into utterances under /Videos/Train, /Videos/Validation, and /Videos/Test (the CSV files for the train, validation, and test sets can be requested from the OMG Emotion Challenge organizers).

  1. data_preparation: run `python create_videoset.py` (a minimal sketch of the splitting step is shown below)
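
The repository's create_videoset.py handles the downloading and splitting; the following is only a minimal sketch of the splitting step, assuming ffmpeg is on the PATH and that the challenge CSV contains per-utterance "video", "utterance", "start", and "end" columns. The column names, file paths, and CSV name in the usage line are placeholders, not the repository's actual values.

```python
# Illustrative sketch of cutting full videos into per-utterance clips.
# NOT the repository's create_videoset.py; CSV column names are assumptions.
import os
import subprocess
import pandas as pd

def split_into_utterances(csv_path, video_dir, out_dir):
    """Cut each downloaded video into the utterance clips listed in the CSV."""
    os.makedirs(out_dir, exist_ok=True)
    df = pd.read_csv(csv_path)
    for _, row in df.iterrows():
        src = os.path.join(video_dir, f"{row['video']}.mp4")
        dst = os.path.join(out_dir, f"{row['video']}_{row['utterance']}.mp4")
        # -ss/-to trim to the utterance span; -c copy avoids re-encoding.
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-ss", str(row["start"]), "-to", str(row["end"]),
             "-c", "copy", dst],
            check=True)

split_into_utterances("train.csv", "Videos/full", "Videos/Train")
```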

In feature extraction, features for the three modalities (visual, audio, and text) are extracted; an illustrative audio example is included after the step below.

  1. feature_extraction:
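
As a purely illustrative example of per-utterance feature extraction, the sketch below computes mean/std-pooled MFCCs for the audio modality with librosa. The actual visual, audio, and text features used in the paper may be computed differently, and the file path in the usage line is hypothetical.

```python
# Illustrative audio feature extraction for one utterance (mean/std MFCCs).
# The repository's actual audio, visual, and text features may differ.
import numpy as np
import librosa

def extract_audio_features(wav_path, n_mfcc=40):
    """Return a fixed-length vector: mean and std of MFCCs over time."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

features = extract_audio_features("Videos/Train/utterance_1.wav")  # hypothetical path
print(features.shape)  # (80,)
```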

In experiment:

  • data.py provides normalized features and labels.
  • models.py defines the unimodal models and the trimodal models with early and late fusion.
  • functions.py defines custom functions used as loss functions and metrics (see the CCC sketch below).
  • train.py runs training and evaluation.
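
The OMG Emotion Challenge evaluates arousal/valence predictions with the concordance correlation coefficient (CCC). The sketch below shows a CCC metric and a 1 - CCC loss written with Keras backend ops; it illustrates the kind of custom function functions.py could define, not necessarily its exact implementation.

```python
# Sketch of a concordance correlation coefficient (CCC) metric and loss in
# Keras backend ops; functions.py may implement these differently.
from keras import backend as K

def ccc(y_true, y_pred):
    mean_t, mean_p = K.mean(y_true), K.mean(y_pred)
    var_t, var_p = K.var(y_true), K.var(y_pred)
    cov = K.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2.0 * cov / (var_t + var_p + K.square(mean_t - mean_p) + K.epsilon())

def ccc_loss(y_true, y_pred):
    # Minimizing 1 - CCC maximizes agreement with the gold annotations.
    return 1.0 - ccc(y_true, y_pred)
```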

Multimodal Fusion

Early Fusion

(figure: early fusion architecture)
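
As a rough illustration of feature-level (early) fusion, the sketch below concatenates the visual, audio, and text feature vectors before a shared regressor. Layer sizes and the two-dimensional (arousal, valence) output are assumptions; see models.py for the actual architecture.

```python
# Minimal early-fusion sketch: concatenate modality features, then regress.
# Layer sizes and the 2-d (arousal, valence) head are illustrative only.
from keras.layers import Input, Dense, Concatenate
from keras.models import Model

def build_early_fusion(visual_dim, audio_dim, text_dim):
    v_in = Input(shape=(visual_dim,))
    a_in = Input(shape=(audio_dim,))
    t_in = Input(shape=(text_dim,))
    fused = Concatenate()([v_in, a_in, t_in])   # fuse at the feature level
    h = Dense(256, activation="relu")(fused)
    out = Dense(2)(h)                           # predict arousal and valence
    return Model([v_in, a_in, t_in], out)
```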

Late Fusion

(figure: late fusion architecture)
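
For decision-level (late) fusion, each modality can be given its own small network and the per-modality predictions combined, here by simple averaging. This is again a sketch under assumed layer sizes, not the repository's exact model.

```python
# Minimal late-fusion sketch: one branch per modality, predictions averaged.
from keras.layers import Input, Dense, Average
from keras.models import Model

def build_late_fusion(visual_dim, audio_dim, text_dim):
    inputs, preds = [], []
    for dim in (visual_dim, audio_dim, text_dim):
        x_in = Input(shape=(dim,))
        h = Dense(128, activation="relu")(x_in)
        preds.append(Dense(2)(h))               # per-modality (arousal, valence)
        inputs.append(x_in)
    out = Average()(preds)                      # fuse at the decision level
    return Model(inputs, out)
```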
