MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation
The code for our INTERSPEECH 2020 paper "Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition"
The code for our IEEE Access (2020) paper "Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion"
Official implementation of the paper "MSAF: Multimodal Split Attention Fusion"
This repository provides an official implementation for the paper MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild.
An audio-text multimodal emotion recognition model that is robust to missing data
This API performs emotion recognition on audio files using a pre-trained model. It accepts an audio file as input, runs inference on it, and returns the predicted emotion along with a confidence score. The API is built on the FastAPI framework for easy development and deployment.
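A minimal sketch of what such an endpoint could look like with FastAPI; the `predict_emotion` helper, the `/predict` route, and the returned label are hypothetical placeholders standing in for the repository's actual model and inference code.

```python
# Minimal FastAPI sketch for an audio emotion-recognition endpoint.
# `predict_emotion` is a hypothetical placeholder for the repository's
# real pre-trained model and preprocessing/inference pipeline.
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="Audio Emotion Recognition API")


def predict_emotion(audio_bytes: bytes) -> tuple[str, float]:
    """Placeholder: run the pre-trained model on raw audio bytes and
    return a (label, confidence) pair. Replace with real inference."""
    return "neutral", 0.0


@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read the uploaded audio file and pass it through the model.
    audio_bytes = await file.read()
    emotion, confidence = predict_emotion(audio_bytes)
    return {"emotion": emotion, "confidence": confidence}
```

Started with, for example, `uvicorn main:app`, the service would accept an audio file POSTed to `/predict` and respond with a JSON object containing the predicted emotion and its confidence score.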
Official repo for "Multi-Corpus Emotion Recognition Method based on Cross-Modal Gated Attention Fusion" in INTERSPEECH 2024