Multivariate, Multi-frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation
PyTorch implementation for the paper: Multivariate, Multi-frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation, CVPR 2023.
- Python 3.8.5
- torch 1.7.1
- CUDA 11.3
- torch-geometric 1.7.2
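A quick sanity check that the installed versions match the ones listed above (a minimal sketch; the exact CUDA build string depends on the wheel you installed):

```python
# Print the installed versions to compare against the requirements above.
import sys

import torch
import torch_geometric

print("Python:", sys.version.split()[0])                # expect 3.8.x
print("torch:", torch.__version__)                      # expect 1.7.1
print("CUDA (build):", torch.version.cuda)              # expect 11.x
print("CUDA available:", torch.cuda.is_available())
print("torch-geometric:", torch_geometric.__version__)  # expect 1.7.2
```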
The raw data can be found at IEMOCAP and MELD.
In our paper, we use pre-extracted features. The multimodal features (including RoBERTa-based and GloVe-based textual features) are available here.
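Pre-extracted conversational features of this kind are usually shipped as pickle files. The file name below is a hypothetical placeholder for illustration only; use the actual file from the download. A minimal loading sketch:

```python
import pickle

# Hypothetical file name for illustration; substitute the actual file from the download.
FEATURE_FILE = "IEMOCAP_features.pkl"

with open(FEATURE_FILE, "rb") as f:
    data = pickle.load(f)

# Inspect the structure before wiring it into a dataloader:
# such feature dumps are usually keyed by dialogue/utterance IDs per modality.
print(type(data))
if isinstance(data, (list, tuple)):
    print("number of fields:", len(data))
elif isinstance(data, dict):
    print("keys:", list(data.keys())[:10])
```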
Results may vary across training machines and random seeds; trying several random seeds can yield better results.
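If you want to sweep seeds yourself, the sources of randomness that typically need seeding are Python's `random`, NumPy, and PyTorch (CPU and CUDA). The helper below is only an illustrative sketch of what that involves; check `train.py` for how the repository actually controls its seed.

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed the common sources of randomness for a (mostly) reproducible run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # cuDNN determinism trades some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)
```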
We also provide some pre-trained checkpoints on RoBERTa-based IEMOCAP here.
For instance, to test on IEMOCAP using the checkpoints:
```
python -u train.py --base-model 'GRU' --dropout 0.5 --lr 0.0001 --batch-size 16 --graph_type='hyper' --epochs=0 --graph_construct='direct' --multi_modal --mm_fusion_mthd='concat_DHT' --modals='avl' --Dataset='IEMOCAP' --norm BN --testing
```
To train on IEMOCAP:
```
python -u train.py --base-model 'GRU' --dropout 0.5 --lr 0.0001 --batch-size 16 --graph_type='hyper' --epochs=80 --graph_construct='direct' --multi_modal --mm_fusion_mthd='concat_DHT' --modals='avl' --Dataset='IEMOCAP' --norm BN --num_L=3 --num_K=4
```
To train on MELD:
```
python -u train.py --base-model 'GRU' --dropout 0.4 --lr 0.0001 --batch-size 16 --graph_type='hyper' --epochs=15 --graph_construct='direct' --multi_modal --mm_fusion_mthd='concat_DHT' --modals='avl' --Dataset='MELD' --norm BN --num_L=3 --num_K=3
```
If you find this code useful, please cite the following paper:
```
@inproceedings{chen2023multivariate,
  title={Multivariate, Multi-Frequency and Multimodal: Rethinking Graph Neural Networks for Emotion Recognition in Conversation},
  author={Chen, Feiyu and Shao, Jie and Zhu, Shuyuan and Shen, Heng Tao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10761--10770},
  year={2023}
}
```