
Multimodal emotion prediction using reinforcement learning

Humans express emotions through different modalities, such as facial expressions, vocal tone, and speech, and each person uses these modalities in different proportions. In this work, we propose an ensemble model based on reinforcement learning that captures the user-specific weight of each modality when predicting emotional status.

The proposed reinforcement learning model learns iteratively from users' actual feedback to adjust the modality weights. The approach will be compared with standard ensemble models; if it outperforms a generalized ensemble model, that would justify using personalized reinforcement learning models to capture personal variation in how emotions are expressed across modalities.
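As a rough illustration of the idea (not the repository's actual implementation), the sketch below uses a simple multiplicative-weights update: each modality model outputs a probability distribution over emotion classes, the ensemble averages those distributions with per-user weights, and the user's feedback on the true label rescales the weights. The emotion labels, modality names, and learning rate here are all hypothetical placeholders.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]  # hypothetical label set


class PerUserModalityWeights:
    """Multiplicative-weights update over modality 'experts'.

    Each modality model emits a probability distribution over emotion
    classes; the ensemble prediction is their weighted average. After
    the user confirms the true emotion, each modality's weight grows in
    proportion to the probability it assigned to that label.
    """

    def __init__(self, modalities=("visual", "audio", "text"), lr=0.5):
        self.modalities = list(modalities)
        self.lr = lr  # how fast the weights adapt to this user's feedback
        # Start every user from a uniform weighting of the modalities.
        self.weights = np.ones(len(self.modalities)) / len(self.modalities)

    def predict(self, modality_probs):
        """modality_probs: array of shape (n_modalities, n_classes)."""
        probs = np.asarray(modality_probs)
        return self.weights @ probs  # weighted ensemble distribution

    def update(self, modality_probs, true_label_idx):
        """Reward each modality by the probability it gave the true label."""
        probs = np.asarray(modality_probs)
        reward = probs[:, true_label_idx]          # per-modality reward in [0, 1]
        self.weights *= np.exp(self.lr * reward)   # exponentiated-gradient step
        self.weights /= self.weights.sum()         # renormalize onto the simplex


# Usage: the three modality models disagree; feedback shifts weight
# toward whichever modality is most reliable for this particular user.
user_model = PerUserModalityWeights()
step_probs = np.array([
    [0.1, 0.7, 0.1, 0.1],  # visual model: predicts "happy"
    [0.6, 0.2, 0.1, 0.1],  # audio model:  predicts "angry"
    [0.2, 0.5, 0.2, 0.1],  # text model:   predicts "happy"
])
print("ensemble:", dict(zip(EMOTIONS, user_model.predict(step_probs).round(3))))
user_model.update(step_probs, true_label_idx=EMOTIONS.index("happy"))
print("weights :", dict(zip(user_model.modalities, user_model.weights.round(3))))
```

A standard generalized ensemble would keep one fixed weight vector for all users; the point of the per-user update above is that repeated feedback lets each user's weights drift toward the modalities through which that user actually expresses emotion.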

About

Our aim is to build an ensemble model based on reinforcement learning, capable of capturing the user-specific weight of each modality (visual, audio, text) in predicting emotional status.
