Official implementation of "Multi-scale Bottleneck Transformer for Weakly Supervised Multimodal Violence Detection"
[Research] Multimodal Emotion Recognition for On-device AI
SuperYOLO is accepted by TGRS
VAPOR: Legged Robot Navigation in Outdoor Vegetation using Offline Reinforcement Learning (ICRA2024)
We propose Multi-Modal Segmentation TransFormer (MMSFormer) that incorporates a novel fusion strategy to perform multimodal material segmentation.
A Transferability-guided Protein-Ligand Interaction Prediction Method
[FR|EN - Trio] 2023 - 2024 Centrale Méditerranée AI Master | Multimodal transcription with text, audio, and video
MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022)
The codebase for our paper on Multi-modal Medical Dialogue Summarization
Repository for context-based emotion recognition
[CVAMD 2021] "End-to-End Learning of Fused Image and Non-Image Feature for Improved Breast Cancer Classification from MRI"
A generalized self-supervised training paradigm for unimodal and multimodal alignment and fusion.
This repository contains the dataset and baselines explained in the paper: M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations
Multimodal sentiment analysis using hierarchical fusion with context modeling
This repository contains the official implementation code of the paper "Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis", accepted at EMNLP 2021.
FusionBrain Challenge 2.0: creating a multimodal multitask model
Few-shot malware classification using fused features from static and dynamic analysis
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text.
Deep-HOSeq: Deep Higher-Order Sequence Fusion for Multimodal Sentiment Analysis.
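Many of the repositories above share the same core building block: combining feature vectors from different modalities into a single representation before classification. A minimal sketch of concatenation-based (early) fusion, with hypothetical embedding sizes and a random linear head purely for illustration:

```python
import numpy as np

def concat_fusion(audio_feat, text_feat, w, b):
    """Early fusion: concatenate modality features, then apply a linear head."""
    fused = np.concatenate([audio_feat, text_feat])
    return w @ fused + b

rng = np.random.default_rng(0)
audio = rng.standard_normal(128)    # hypothetical audio embedding
text = rng.standard_normal(64)      # hypothetical text embedding
w = rng.standard_normal((2, 192))   # 2-class head over the fused 192-dim vector
b = np.zeros(2)

logits = concat_fusion(audio, text, w, b)
print(logits.shape)  # (2,)
```

Hierarchical or attention-based fusion schemes (as in several papers listed here) replace the plain concatenation with learned, modality-aware combination, but the input/output contract is the same.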