Multimodal Sentiment Analysis (MSA) paper list

Starting September 27, 2019, all new readings are recorded in this document. Because I will graduate soon and have many things to do, updates are slow; after some time, I plan to reorganize all of the papers more systematically during my graduate study.

Multimodal Sentiment Analysis

1、ICON: Interactive Conversational Memory Network for Multimodal Emotion Detection (EMNLP 2018). https://www.aclweb.org/anthology/D18-1280

2、Context-Dependent Sentiment Analysis in User-Generated Videos (ACL 2017)

3、Multi-level Multiple Attentions for Contextual Multimodal Sentiment Analysis (ICDM 2017)

Code for 2 and 3: https://github.com/SenticNet/multimodal-fusion

4、Tensor Fusion Network for Multimodal Sentiment Analysis (EMNLP 2017)

Code for 4: https://github.com/Justin1904/TensorFusionNetworks
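
TFN's fusion step is an outer product of the per-modality embeddings, each augmented with a constant 1 so that unimodal and bimodal interaction terms survive alongside the trimodal ones. A minimal sketch of just that step (dimensions and names are illustrative, not taken from the repository above):

```python
import torch

def tensor_fusion(h_text, h_audio, h_video):
    """Outer-product fusion in the spirit of TFN: append a constant 1
    to each modality embedding, then take the 3-way outer product."""
    one = torch.ones(1)
    t = torch.cat([h_text, one])   # (d_t + 1,)
    a = torch.cat([h_audio, one])  # (d_a + 1,)
    v = torch.cat([h_video, one])  # (d_v + 1,)
    # 3-way outer product -> (d_t+1, d_a+1, d_v+1), flattened for a classifier
    return torch.einsum('i,j,k->ijk', t, a, v).flatten()

fused = tensor_fusion(torch.randn(32), torch.randn(16), torch.randn(8))
print(fused.shape)  # torch.Size([5049]) = 33 * 17 * 9
```

The fused vector grows multiplicatively with the modality dimensions, which is the cost that later low-rank approaches (item 12 below) avoid.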

5、Multimodal Transformer for Unaligned Multimodal Language Sequences (ACL 2019)

Code for 5: https://github.com/yaohungt/Multimodal-Transformer
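
MulT's core module is crossmodal attention: the target modality supplies the queries and another modality supplies the keys and values, so the target sequence is repeatedly reinforced with source-modality features without requiring word-level alignment. A rough single-block sketch using stock PyTorch attention (shapes, names, and the residual/norm placement are my own simplifications):

```python
import torch
import torch.nn as nn

class CrossmodalBlock(nn.Module):
    """One crossmodal attention layer in the spirit of MulT:
    queries from the target modality, keys/values from the source."""
    def __init__(self, d_model=40, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, target, source):
        # target: (batch, len_t, d); source: (batch, len_s, d).
        # The two lengths may differ, which is what lets this run
        # on unaligned streams.
        attended, _ = self.attn(query=target, key=source, value=source)
        return self.norm(target + attended)

block = CrossmodalBlock()
text = torch.randn(2, 50, 40)    # e.g. word-level features
audio = torch.randn(2, 375, 40)  # e.g. acoustic frames
print(block(text, audio).shape)  # torch.Size([2, 50, 40])
```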

6、Memory Fusion Network for Multi-view Sequential Learning (AAAI 2018)

Code for 6: https://github.com/pliang279/MFN
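
MFN runs one LSTM per view and maintains a shared multi-view gated memory that is updated at every time step from the cross-view interactions. A toy version of just the gated update (the paper's Delta-memory attention stage is omitted; the layer names and exact gating are my simplified guesses at the structure, not the authors' code):

```python
import torch
import torch.nn as nn

class GatedMemoryUpdate(nn.Module):
    """Simplified multi-view gated memory in the spirit of MFN: two learned
    gates blend the previous memory with newly proposed cross-view content."""
    def __init__(self, d_views, d_mem):
        super().__init__()
        self.proposal = nn.Linear(d_views, d_mem)
        self.gate_keep = nn.Linear(d_views, d_mem)
        self.gate_write = nn.Linear(d_views, d_mem)

    def forward(self, memory, cross_view):
        # cross_view: concatenated per-view LSTM states at this time step
        u = torch.tanh(self.proposal(cross_view))        # proposed content
        g1 = torch.sigmoid(self.gate_keep(cross_view))   # retain old memory
        g2 = torch.sigmoid(self.gate_write(cross_view))  # admit new content
        return g1 * memory + g2 * u
```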

7、Factorized Multimodal Transformer for Multimodal Sequential Learning

Code for 7: https://github.com/A2Zadeh/Factorized-Multimodal-Transformer (released April 15, 2020)

8、Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling

Code for 8: https://github.com/SenticNet/hfusion

9、Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities (AAAI 2019)

Code for 9: https://github.com/hainow/MCTN

10、Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors (AAAI 2019)

Code for 10: https://github.com/victorywys/RAVEN

11、A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis (ACL 2020 workshop)

Code for 11: https://github.com/jbdel/MOSEI_UMONS

12、Low Rank Fusion based Transformers for Multimodal Sequences (ACL 2020 workshop)
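
This builds on low-rank multimodal fusion (LMF), which replaces the full outer-product tensor from TFN above with rank-r factors per modality: project each 1-augmented modality vector through its factors, multiply elementwise across modalities, and sum over the rank. A minimal sketch of that fusion step (dimensions and initialization are illustrative):

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Low-rank fusion in the spirit of LMF: rank-r factor matrices per
    modality stand in for the exponentially large fusion tensor."""
    def __init__(self, dims, d_out, rank=4):
        super().__init__()
        # one (rank, d_m + 1, d_out) factor per modality; the +1 mirrors
        # TFN's appended constant so lower-order interactions are kept
        self.factors = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, d + 1, d_out) * 0.1) for d in dims]
        )

    def forward(self, inputs):
        fused = None
        for x, w in zip(inputs, self.factors):
            x = torch.cat([x, torch.ones(x.size(0), 1)], dim=-1)  # (b, d_m+1)
            proj = torch.einsum('bd,rdo->bro', x, w)              # (b, r, d_out)
            fused = proj if fused is None else fused * proj       # elementwise
        return fused.sum(dim=1)  # sum over the rank factors

fusion = LowRankFusion(dims=[32, 16, 8], d_out=64)
print(fusion([torch.randn(2, 32), torch.randn(2, 16), torch.randn(2, 8)]).shape)
# torch.Size([2, 64])
```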

13、Quantum Cognitively Motivated Decision Fusion for Video Sentiment Analysis (AAAI 2021)

14、Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis (AAAI 2021)

15、An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis (IJCAI 2021)

16、Quantum-inspired Neural Network for Conversational Emotion Recognition (AAAI 2021)

Multimodal BERT

1、VideoBERT: A Joint Model for Video and Language Representation Learning (ICCV 2019)

2、ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks (NeurIPS 2019)

3、VisualBERT: A Simple and Performant Baseline for Vision and Language

4、Selfie: Self-supervised Pretraining for Image Embedding

5、Contrastive Bidirectional Transformer for Temporal Representation Learning

6、M-BERT: Injecting Multimodal Information in the BERT Structure

7、LXMERT: Learning Cross-Modality Encoder Representations from Transformers (EMNLP 2019)

8、Fusion of Detected Objects in Text for Visual Question Answering (EMNLP 2019)

9、Unified Vision-Language Pre-Training for Image Captioning and VQA

Code for 9: https://github.com/LuoweiZhou/VLP

10、VL-BERT: Pre-training of Generic Visual-Linguistic Representations

11、Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training

12、UNITER: Learning UNiversal Image-TExt Representations

13、SpeechBERT: Cross-Modal Pre-trained Language Model for End-to-end Spoken Question Answering

14、Multimodal Transformer for Unaligned Multimodal Language Sequences (ACL 2019)

Code for 14: https://github.com/yaohungt/Multimodal-Transformer

15、Integrating Multimodal Information in Large Pretrained Transformers (ACL 2020)

Code for 15: https://github.com/WasifurRahman/BERT_multimodal_transformer
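
The integration mechanism in this paper is an adaptation gate: inside the pretrained transformer, each word vector is shifted by a displacement computed from the accompanying audio and visual features, with the shift's magnitude capped relative to the word vector's own norm. A condensed sketch of such a gate (a simplification; layer names and dimensions are mine, not the released code's):

```python
import torch
import torch.nn as nn

class AdaptationGate(nn.Module):
    """Condensed multimodal adaptation gate: shift each word vector by a
    gated audio/visual displacement with a norm-capped scaling factor."""
    def __init__(self, d_text, d_audio, d_video, beta=0.5):
        super().__init__()
        self.gate_a = nn.Linear(d_text + d_audio, d_text)
        self.gate_v = nn.Linear(d_text + d_video, d_text)
        self.proj_a = nn.Linear(d_audio, d_text)
        self.proj_v = nn.Linear(d_video, d_text)
        self.norm = nn.LayerNorm(d_text)
        self.beta = beta

    def forward(self, h, a, v):
        # gates decide, per dimension, how much audio/visual to admit
        g_a = torch.relu(self.gate_a(torch.cat([h, a], dim=-1)))
        g_v = torch.relu(self.gate_v(torch.cat([h, v], dim=-1)))
        disp = g_a * self.proj_a(a) + g_v * self.proj_v(v)
        # cap the shift: alpha = min(beta * ||h|| / ||disp||, 1)
        alpha = torch.clamp(
            self.beta * h.norm(dim=-1, keepdim=True)
            / (disp.norm(dim=-1, keepdim=True) + 1e-6),
            max=1.0,
        )
        return self.norm(h + alpha * disp)
```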

16、CM-BERT: Cross-Modal BERT for Text-Audio Sentiment Analysis (ACM MM 2020, ours)

Code for 16: https://github.com/thuiar/Cross-Modal-BERT

Multi-task Sentiment Analysis

1、Attention-augmented end-to-end multi-task learning for emotion prediction from speech.

https://arxiv.org/pdf/1903.12424.pdf

2、Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis.

https://arxiv.org/pdf/1905.05812.pdf

3、Multi-task Learning for Target-dependent Sentiment Classification.

https://arxiv.org/pdf/1902.02930.pdf

4、Sentiment and Sarcasm Classification with Multitask Learning.

https://sentic.net/sentiment-and-sarcasm-classification-with-multitask-learning.pdf
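
A pattern shared by the papers above is hard parameter sharing: one encoder feeds several task-specific heads, and the per-task losses are combined with weights. A generic sketch of that setup (purely illustrative, not any single paper's architecture):

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Hard parameter sharing: a shared encoder with one head per task."""
    def __init__(self, d_in, d_hidden, n_emotions):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.sentiment_head = nn.Linear(d_hidden, 1)         # regression
        self.emotion_head = nn.Linear(d_hidden, n_emotions)  # classification

    def forward(self, x):
        z = self.encoder(x)
        return self.sentiment_head(z), self.emotion_head(z)

model = MultiTaskModel(d_in=128, d_hidden=64, n_emotions=6)
sent, emo = model(torch.randn(4, 128))
# joint loss: weighted sum of the per-task losses
loss = nn.functional.mse_loss(sent.squeeze(-1), torch.randn(4)) \
     + 0.5 * nn.functional.cross_entropy(emo, torch.randint(0, 6, (4,)))
loss.backward()
```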

Missing Modality

1、SMIL: Multimodal Learning with Severely Missing Modality (AAAI 2021)

Code for 1: https://github.com/mengmenm/SMIL
