This repository contains the PyTorch implementation of the proposed multimodal sentiment analysis model.
The paper "TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis" has been accepted by Knowledge-Based Systems.
Here is a list of the compared baseline models.
- MISA: Modality-invariant and -specific representations for multimodal sentiment analysis, ACM MM, 2020. [Paper]
- MMIM: Improving multimodal fusion with hierarchical mutual information maximization for multimodal sentiment analysis, EMNLP, 2021. [Paper]
- HyCon: Hybrid contrastive learning of tri-modal representation for multimodal sentiment analysis, IEEE Transactions on Affective Computing, 2022. [Paper]
- TETFN: A text enhanced transformer fusion network for multimodal sentiment analysis, Pattern Recognition, 2023. [Paper]
- AOBERT: All-modalities-in-One BERT for multimodal sentiment analysis, Information Fusion, 2023. [Paper]
In this study, we develop a transformer model equipped with a modality-binding learning mechanism to extract unified affective information from multiple modalities. The proposed structure and method also hold potential for broader applications in other tasks involving multimodal interaction and fusion, such as multimodal disease diagnosis.
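To illustrate the general idea of transformer-based multimodal fusion described above, here is a minimal, hypothetical PyTorch sketch (not the paper's actual TMBL architecture; the class name, feature dimensions, and pooling choice are all assumptions for illustration). Each modality sequence is projected into a shared embedding space, the token sequences are concatenated, and a transformer encoder lets self-attention mix information across modalities before a regression head predicts a sentiment score.

```python
# Hypothetical sketch of transformer-based multimodal fusion.
# NOT the paper's implementation: dims, pooling, and hyperparameters are illustrative.
import torch
import torch.nn as nn

class NaiveMultimodalFusion(nn.Module):
    def __init__(self, dims=(768, 74, 35), d_model=128, nhead=4, num_layers=2):
        super().__init__()
        # One linear projection per modality (text, audio, vision) into d_model.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)  # scalar sentiment score

    def forward(self, text, audio, vision):
        # Project each modality, concatenate along the token axis, and let
        # self-attention attend across all modalities jointly.
        tokens = torch.cat(
            [p(x) for p, x in zip(self.proj, (text, audio, vision))], dim=1
        )
        fused = self.encoder(tokens)
        return self.head(fused.mean(dim=1))  # mean-pool tokens -> per-sample score

model = NaiveMultimodalFusion()
out = model(
    torch.randn(2, 50, 768),  # text features
    torch.randn(2, 50, 74),   # audio features
    torch.randn(2, 50, 35),   # vision features
)
print(out.shape)  # torch.Size([2, 1])
```

The feature widths (768/74/35) mimic typical BERT/COVAREP/Facet dimensions on CMU-MOSI-style datasets, but any per-modality sizes work since each modality has its own projection.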
If you have any questions, please open an issue or pull request, and we will address it as soon as possible.