
Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning

This paper has been accepted to the AAAI 2026 Main Conference.

PRC-Emo — A unified framework integrating Prompt engineering, demonstration Retrieval, and Curriculum learning for Large Language Model (LLM)-based Emotion Recognition in Conversation (ERC).

Abstract

Emotion Recognition in Conversation (ERC) is a crucial task for understanding human emotions and enabling natural human-computer interaction. Although Large Language Models (LLMs) have recently shown great potential in this field, their ability to capture the intrinsic connections between explicit and implicit emotions remains limited. We propose a novel ERC training framework, PRC-Emo, which integrates Prompt engineering, demonstration Retrieval, and Curriculum learning, with the goal of exploring whether LLMs can effectively perceive emotions in conversational contexts. Specifically, we design emotion-sensitive prompt templates based on both explicit and implicit emotional cues to better guide the model in understanding the speaker’s psychological states. We construct the first dedicated demonstration retrieval repository for ERC, which includes training samples from widely used datasets, as well as high-quality dialogue examples generated by LLMs and manually verified. Moreover, we introduce a curriculum learning strategy into the LoRA fine-tuning process, incorporating weighted emotional shifts between same-speaker and different-speaker utterances to assign difficulty levels to dialogue samples, which are then organized in an easy-to-hard training sequence. Experimental results on two benchmark datasets—IEMOCAP and MELD—show that our method achieves new state-of-the-art (SOTA) performance, demonstrating the effectiveness and generalizability of our approach in improving LLM-based emotional understanding.
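The curriculum learning idea above — scoring dialogue difficulty by weighted emotion shifts between same-speaker and different-speaker utterances, then ordering samples easy-to-hard — can be sketched as follows. The weights and the scoring rule here are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch of curriculum difficulty scoring: count emotion shifts
# between consecutive utterances, weighting same-speaker shifts differently
# from different-speaker shifts. Weights below are assumptions for demonstration.

W_SAME = 2.0  # assumed weight for an emotion shift within the same speaker
W_DIFF = 1.0  # assumed weight for a shift across different speakers

def difficulty(labels, speakers):
    """Weighted count of emotion shifts between consecutive utterances."""
    score = 0.0
    for i in range(1, len(labels)):
        if labels[i] != labels[i - 1]:
            score += W_SAME if speakers[i] == speakers[i - 1] else W_DIFF
    return score

def curriculum_order(dialogues):
    """Order dialogues easy-to-hard by difficulty score."""
    return sorted(dialogues, key=lambda d: difficulty(d["labels"], d["speakers"]))

# Toy dialogues (hypothetical): "b" has no emotion shift, so it comes first.
dialogs = [
    {"id": "a", "labels": [4, 2, 4, 4], "speakers": ["M", "F", "M", "M"]},
    {"id": "b", "labels": [4, 4], "speakers": ["M", "F"]},
]
print([d["id"] for d in curriculum_order(dialogs)])  # prints ['b', 'a']
```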

Demonstration Retrieval Repository

We release the first dedicated demonstration retrieval repository for ERC, which is a core component of the PRC-Emo framework.

This repository contains:

  • Demonstration samples constructed from widely used ERC datasets (e.g., IEMOCAP, MELD)
  • High-quality dialogue demonstrations generated by LLMs and manually verified
  • Retrieval-ready formats designed for prompt-based and RAG-style emotion recognition
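As a rough sketch of how such a repository can be queried, the snippet below ranks stored demonstrations by cosine similarity of bag-of-words vectors against a query utterance. A real pipeline would typically use dense sentence embeddings; the repository entries and field names here are hypothetical.

```python
# Minimal demonstration-retrieval sketch: rank repository examples by
# bag-of-words cosine similarity to the query utterance. Illustrative only;
# the actual framework's retrieval method may differ.
from collections import Counter
import math

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, repository, k=2):
    """Return the top-k demonstrations most similar to the query."""
    qv = vectorize(query)
    ranked = sorted(repository, key=lambda d: cosine(qv, vectorize(d["utterance"])), reverse=True)
    return ranked[:k]

# Hypothetical repository entries.
repo = [
    {"utterance": "I asked her to marry me", "emotion": "happy"},
    {"utterance": "Leave me alone", "emotion": "angry"},
    {"utterance": "Guess what happened today", "emotion": "excited"},
]
top = retrieve("she said yes when I asked her to marry me", repo, k=1)
print(top[0]["emotion"])  # prints "happy"
```

The retrieved demonstrations would then be inserted into the prompt template as in-context examples.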

Reference & Acknowledgement

This project draws inspiration from the BiosERC framework, which explores integrating speaker biographical information for enhancing ERC tasks. We sincerely appreciate their excellent work. If you would like to access more datasets beyond IEMOCAP and MELD, please visit their official repository. Please consider citing their work if you use similar ideas:

Xue, Jieying et al. “BiosERC: Integrating Biography Speakers Supported by LLMs for ERC Tasks.” International Conference on Artificial Neural Networks (2024).

@InProceedings{10.1007/978-3-031-72344-5_19,
    author    = "Xue, Jieying and Nguyen, Minh-Phuong and Matheny, Blake and Nguyen, Le-Minh",
    title     = "BiosERC: Integrating Biography Speakers Supported by LLMs for ERC Tasks",
    booktitle = "Artificial Neural Networks and Machine Learning -- ICANN 2024",
    year      = "2024",
    publisher = "Springer Nature Switzerland",
    address   = "Cham",
    pages     = "277--292",
    isbn      = "978-3-031-72344-5"
}

The introduction of implicit emotion interpretation in this work is inspired by the paper “Forecasting Implicit Emotions Elicited in Conversations” from the Miyao Laboratory at the University of Tokyo, which explores implicit emotion forecasting in dialogues. Please consider citing their work if you use related ideas:

@inproceedings{koga-etal-2024-forecasting-implicit,
    title = "Forecasting Implicit Emotions Elicited in Conversations",
    author = "Koga, Yurie  and
      Kando, Shunsuke  and
      Miyao, Yusuke",
    editor = "Mahamood, Saad  and
      Minh, Nguyen Le  and
      Ippolito, Daphne",
    booktitle = "Proceedings of the 17th International Natural Language Generation Conference",
    month = sep,
    year = "2024",
    address = "Tokyo, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.inlg-main.12/",
    doi = "10.18653/v1/2024.inlg-main.12",
    pages = "145--152",
}

Data

  • IEMOCAP data structure example (conversation IDs map to per-utterance labels, sentences, and speaker genders; the second conversation ID below is illustrative):
    {
        # first conversation
        "Ses05M_impro03": {
            "labels": [
                4,
                2,
                4,
                4
            ],
            "sentences": [
                "Guess what?",
                "what?",
                "I did it, I asked her to marry me.",
                "Yes, I did it."
            ],
            "genders": [
                "M",
                "F",
                "M",
                "M"
            ]
        },

        # second conversation
        "Ses05M_impro04": {
            "labels": [
                4,
                2
            ],
            "sentences": [
                "Guess what?",
                "what?"
            ],
            "genders": [
                "M",
                "F"
            ]
        }
    }
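A loader for this structure can verify that the three per-utterance lists stay aligned, which catches labeling mistakes early. This is a sketch assuming the released files are plain JSON (without the inline `#` comments shown above); the function name is hypothetical.

```python
# Sketch: load an IEMOCAP-style JSON file and check that each conversation's
# labels, sentences, and genders lists all have one entry per utterance.
import json

def load_conversations(path):
    """Load the dataset and verify per-utterance alignment."""
    with open(path) as f:
        data = json.load(f)
    for conv_id, conv in data.items():
        n = len(conv["sentences"])
        if len(conv["labels"]) != n or len(conv["genders"]) != n:
            raise ValueError(f"{conv_id}: misaligned labels/sentences/genders")
    return data
```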

Python Environment

Initialize the Python environment:

    conda create --prefix=./env_py38  python=3.9
    conda activate ./env_py38 
    pip install -r requirements.txt

Run

  1. Initialize the environment following the steps above.
  2. Train
    Run the following commands to train a new model.
    python src/get_rag_final.py # build the demonstration retrieval repository
    python src/llm_bio_extract_v2.py # extract speaker biographies
    python src/llm_emotion_extract_v2.py # extract explicit and implicit emotion interpretations
    
    bash scripts/train_llm.sh # train the LLM; use HF_ENDPOINT=https://hf-mirror.com bash scripts/train_llm.sh if in China

    Note: Please check these scripts to verify the settings and choose which dataset to run. For IEMOCAP, set MODEL_ID="Qwen2.5-7B-Instruct"; for MELD, set MODEL_ID="Qwen3-8B".

Citation

If you find this work helpful, please consider citing our paper:

@article{Li_Liu_Qiao_Xu_2026,
  title={Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning},
  volume={40},
  url={https://ojs.aaai.org/index.php/AAAI/article/view/40446},
  DOI={10.1609/aaai.v40i38.40446},
  number={38},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  author={Li, Xinran and Liu, Yu and Qiao, Jiaqi and Xu, Xiujuan},
  year={2026},
  month={Mar.},
  pages={31778-31786}
}
