
# SSLCL: A Computationally Efficient and Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations

## Introduction

Supervised Sample-Label Contrastive Learning with Soft Hirschfeld-Gebelein-Rényi Maximal Correlation (SSLCL) is an efficient and model-agnostic supervised contrastive learning framework for Emotion Recognition in Conversations (ERC). It eliminates the need for large batch sizes and can be seamlessly integrated with existing ERC models without introducing model-specific assumptions. Extensive experiments on two ERC benchmark datasets, IEMOCAP and MELD, demonstrate the compatibility and superiority of SSLCL compared with existing state-of-the-art supervised contrastive learning (SCL) methods.
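As background, the soft-HGR maximal correlation objective that the framework's name refers to is commonly stated as follows for feature mappings $f$ and $g$ of two variables $X$ and $Y$ (this is the standard form from the soft-HGR literature; the exact objective SSLCL optimizes is given in the paper):

$$\max_{f,\,g}\ \mathbb{E}\big[f(X)^{\top} g(Y)\big] \;-\; \tfrac{1}{2}\,\operatorname{tr}\!\big(\operatorname{Cov}(f(X))\,\operatorname{Cov}(g(Y))\big)$$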

The full paper is available at https://arxiv.org/abs/2310.16676.

## Model Architecture

The overall SSLCL framework consists of three key components: sample feature extraction, label learning, and sample-label contrastive learning.

Figure 1: Illustration of the overall framework of SSLCL.
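To make the sample-label idea concrete, here is a minimal, hypothetical sketch (not the authors' released code): the class name, dimensions, temperature, and the cross-entropy-style contrast are illustrative assumptions, and SSLCL's actual similarity measure is the soft-HGR-based one described in the paper. The sketch contrasts each sample's feature against one learnable embedding per emotion label, which is why no large batch of positive/negative sample pairs is required.

```python
# Minimal sketch of sample-label contrastive learning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleLabelContrast(nn.Module):
    def __init__(self, feat_dim: int, num_labels: int, temperature: float = 0.1):
        super().__init__()
        # One trainable embedding per emotion label (hypothetical parameterization).
        self.label_emb = nn.Embedding(num_labels, feat_dim)
        self.temperature = temperature

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) utterance features from any ERC backbone.
        # labels:   (batch,) integer emotion labels.
        f = F.normalize(features, dim=-1)
        g = F.normalize(self.label_emb.weight, dim=-1)  # (num_labels, feat_dim)
        logits = f @ g.t() / self.temperature           # sample-label similarities
        # Contrasting each sample against all label embeddings reduces to a
        # cross-entropy over labels, so batch size does not limit the negatives.
        return F.cross_entropy(logits, labels)

# Usage with placeholder features standing in for an ERC backbone's outputs:
loss_fn = SampleLabelContrast(feat_dim=256, num_labels=6)
feats = torch.randn(8, 256)
labels = torch.randint(0, 6, (8,))
loss = loss_fn(feats, labels)
loss.backward()
```

Because the label embeddings are parameters of the loss module itself, this scheme plugs into any existing ERC model that outputs utterance-level features, which is what makes the approach model-agnostic.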

## Citation

If you find our work helpful to your research, please cite our paper as follows.

```bibtex
@misc{shi2023sslcl,
      title={SSLCL: An Efficient Model-Agnostic Supervised Contrastive Learning Framework for Emotion Recognition in Conversations},
      author={Tao Shi and Xiao Liang and Yaoyuan Liang and Xinyi Tong and Shao-Lun Huang},
      year={2023},
      eprint={2310.16676},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```