
MERTools

Correspondence to:

Environment

conda env create -f environment.yml
  • If you encounter the error "OSError: libtorch_cuda_cpp.so: cannot open shared object file: No such file or directory", please run "pip install -U torch torchaudio --no-cache-dir".
  • If your CUDA version is old (such as 10.2), please check the install instructions for PyTorch-related packages at "https://pytorch.org/get-started/previous-versions". A setup sketch covering both cases is given below.
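
For reference, here is a minimal setup sketch that combines the command above with the two fixes. The environment name placeholder and the CUDA 10.2 version pins are assumptions (the pins are taken from the PyTorch previous-versions page) and should be checked against environment.yml and your installed CUDA driver.

  # build and activate the conda environment (use the name defined in environment.yml)
  conda env create -f environment.yml
  conda activate <env-name-from-environment.yml>

  # if "OSError: libtorch_cuda_cpp.so: cannot open shared object file" is raised, reinstall torch/torchaudio
  pip install -U torch torchaudio --no-cache-dir

  # example for an older CUDA toolkit (10.2); version pins taken from
  # https://pytorch.org/get-started/previous-versions and may need adjusting
  pip install torch==1.12.1+cu102 torchvision==0.13.1+cu102 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu102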

MER2023

1. Dataset

To download the dataset, please fill out the EULA and send it to lianzheng2016@ia.ac.cn. The EULA requires participants to use this dataset only for academic research and not to edit the samples or upload them to the Internet.

2. Baseline

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning
Zheng Lian, Haiyang Sun, Licai Sun, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao

Please cite our paper if you find our work useful for your research:

@inproceedings{lian2023mer,
  title={MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and Chen, Kang and Xu, Mingyu and Wang, Kexin and Xu, Ke and He, Yu and Li, Ying and Zhao, Jinming and others},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={9610--9614},
  year={2023}
}

code: see ./MER2023

3. Website

http://merchallenge.cn/

MERBench

MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition
Zheng Lian, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, Jianhua Tao

Please cite our paper if you find our work useful for your research:

@article{lian2023mer,
  title={MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition},
  author={Lian, Zheng and Sun, Licai and Ren, Yong and Gu, Hao and Sun, Haiyang and Chen, Lan and Liu, Bin and Tao, Jianhua},
  journal={arXiv preprint arXiv:2401.03429},
  year={2024}
}

code: see ./MERBench

MER2024

1. Dataset

To download the dataset, please fill out the EULA (link coming soon) and send it to our official email address merchallenge.contact@gmail.com. The EULA requires participants to use this dataset only for academic research and not to edit the samples or upload them to the Internet.

2. Baseline

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition

Please cite our paper if you find our work useful for your research:

@article{lian2024mer,
  title={MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition},
  author={Lian, Zheng and Sun, Haiyang and Sun, Licai and others},
  year={2024}
}

code: see ./MER2024

3. Website

https://zeroqiaoba.github.io/MER2024-website/
