RAMer: Reconstruction-based Adversarial Model for Multi-party Multi-modal Multi-label Emotion Recognition
This is an anonymous repository for double-blind manuscript review.
This README covers:

- The model framework.
- Requirements: the dependencies and libraries needed to run the code.
- Training: how to train the model, with command examples and parameter explanations.
- Evaluation: how to evaluate the model's performance.
The framework of RAMer. Given incomplete multi-modal inputs, RAMer first encodes each modality individually through an auxiliary task, then feeds the features into a reconstruction-based adversarial network to extract modality-specific and shared (common) representations. Finally, a stacked shuffle layer learns enhanced fused representations.
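The pipeline above can be sketched end to end in a few lines. This is a minimal NumPy illustration of the data flow only (per-modality encoding, specific/common extraction, shuffle-and-fuse); all layer sizes, function names, and the simulated missing modality are assumptions, not the repository's actual implementation.

```python
# Minimal data-flow sketch of the RAMer pipeline; sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID = 64, 32                    # assumed input / hidden dimensions
MODALITIES = ["text", "audio", "visual"]

def encode(x, W):
    """Per-modality encoder: one linear layer + ReLU (stand-in for the auxiliary-task encoder)."""
    return np.maximum(x @ W, 0.0)

def extract(h, W_spec, W_comm):
    """Stand-in for the reconstruction-based adversarial network: project
    features into modality-specific and shared (common) subspaces."""
    return h @ W_spec, h @ W_comm

def shuffle_layer(feats, rng):
    """Stacked shuffle layer sketch: stack per-modality features, permute
    them along the modality axis, then fuse by averaging."""
    stacked = np.stack(feats)           # (num_modalities, D_HID)
    perm = rng.permutation(len(feats))
    return stacked[perm].mean(axis=0)   # fused representation

# Random stand-ins for incomplete multi-modal inputs.
inputs = {m: rng.standard_normal(D_IN) for m in MODALITIES}
inputs["audio"] = np.zeros(D_IN)        # simulate a missing modality

W_enc = {m: rng.standard_normal((D_IN, D_HID)) for m in MODALITIES}
W_spec = rng.standard_normal((D_HID, D_HID))
W_comm = rng.standard_normal((D_HID, D_HID))

common_feats = []
for m in MODALITIES:
    h = encode(inputs[m], W_enc[m])
    spec, comm = extract(h, W_spec, W_comm)
    common_feats.append(comm)

fused = shuffle_layer(common_feats, rng)
print(fused.shape)  # (32,)
```

In the paper's actual model the extraction step is trained adversarially with a reconstruction objective; here it is reduced to two projections purely to show where each stage sits in the flow.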
To install requirements:
pip install -r requirements.txt
To train the model(s) in the paper, run this command:
sh train.sh
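Since `train.sh` wraps a Python entry point, the hyperparameters it passes can be illustrated with an argument parser. Every flag name and default below is a hypothetical assumption about what such a script might expose, not the repository's actual interface.

```python
# Hypothetical sketch of flags a RAMer training script might accept;
# all names and defaults here are assumptions for illustration.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="RAMer training (illustrative flags)")
    p.add_argument("--dataset", choices=["mosei", "m3ed", "memor"], default="mosei",
                   help="which benchmark to train on")
    p.add_argument("--lr", type=float, default=1e-4, help="learning rate")
    p.add_argument("--batch-size", type=int, default=32, help="mini-batch size")
    p.add_argument("--epochs", type=int, default=50, help="number of training epochs")
    p.add_argument("--missing-rate", type=float, default=0.3,
                   help="fraction of modalities dropped to simulate incomplete inputs")
    return p

args = build_parser().parse_args(["--dataset", "m3ed", "--lr", "5e-5"])
print(args.dataset, args.lr)  # m3ed 5e-05
```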
To evaluate the model on CMU-MOSEI, M3ED, and MEmoR: evaluation is included in train.sh and runs automatically after training completes.
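Multi-label emotion benchmarks such as these are commonly scored with micro-averaged F1. The NumPy sketch below shows how that metric is computed over binary label matrices; it mirrors the kind of score typically reported on such benchmarks and is an illustration, not the repository's evaluation code.

```python
# Micro-averaged F1 for multi-label predictions, in plain NumPy.
import numpy as np

def micro_f1(y_true, y_pred):
    """y_true, y_pred: (num_samples, num_labels) binary arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives over all labels
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0]])
print(round(micro_f1(y_true, y_pred), 3))  # 0.8
```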
You can download the model checkpoints here:
- [model].