The codebase for our paper on Multi-modal Medical Dialogue Summarization
[FR|EN - Trio] 2023-2024 Centrale Méditerranée AI Master | Multimodal transcription with text, audio, and video
A Transferability-guided Protein-Ligand Interaction Prediction Method
We propose the Multi-Modal Segmentation TransFormer (MMSFormer), which incorporates a novel fusion strategy for multimodal material segmentation.
Repository for context-based emotion recognition
Web scraper for Wildberries + simple vectorization/multimodal embedding workflow
The code and data for the paper "Inferring Climate Change Stances from Multimodal Tweets", accepted to the Short Paper track of SIGIR 2024
Source code of a sample iOS app for the paper by Alfreds Lapkovskis, Natalia Nefedova & Ali Beikmohammadi (2024): Automatic Fused Multimodal Deep Learning for Plant Identification
Official implementation of "Multi-scale Bottleneck Transformer for Weakly Supervised Multimodal Violence Detection"
Repo for "Centaur: Robust Multimodal Fusion for Human Activity Recognition"
FusionBrain Challenge 2.0: creating a multimodal multitask model
Source code for the paper by Alfreds Lapkovskis, Natalia Nefedova & Ali Beikmohammadi (2024): Automatic Fused Multimodal Deep Learning for Plant Identification
Multimodal sentiment analysis
Few-shot malware classification using fused features from static and dynamic analysis
A generalized self-supervised training paradigm for unimodal and multimodal alignment and fusion (a generic contrastive-alignment sketch follows this list)
Deep-HOSeq: Deep Higher-Order Sequence Fusion for Multimodal Sentiment Analysis.
[CVAMD 2021] "End-to-End Learning of Fused Image and Non-Image Feature for Improved Breast Cancer Classification from MRI"
This repository contains the dataset and baselines described in the paper: M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations
MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022)
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text (a generic late-fusion sketch follows this list).
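None of the repositories above share a common API, but several of them (for example, the self-supervised alignment-and-fusion paradigm) revolve around aligning embeddings from different modalities. Below is a minimal, generic sketch of a symmetric contrastive (InfoNCE-style) alignment loss in PyTorch; every name here is hypothetical, and the code is not taken from any repository listed above.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(text_emb, audio_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired (text, audio) embeddings.
    text_emb = F.normalize(text_emb, dim=-1)    # unit-normalize each modality
    audio_emb = F.normalize(audio_emb, dim=-1)
    logits = text_emb @ audio_emb.t() / temperature  # scaled cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)  # matched pairs lie on the diagonal
    loss_t2a = F.cross_entropy(logits, targets)      # text -> audio direction
    loss_a2t = F.cross_entropy(logits.t(), targets)  # audio -> text direction
    return (loss_t2a + loss_a2t) / 2

# Example: a batch of 8 paired 512-dimensional embeddings.
loss = contrastive_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))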
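For the voice-and-text action-selection entry, the simplest common pattern is late fusion: embed each modality separately, concatenate the feature vectors, and classify. This is a minimal sketch under that assumption; the class name, dimensions, and feature inputs are made up for illustration and are not from the repository.

import torch
import torch.nn as nn

class LateFusionActionSelector(nn.Module):
    # Concatenates voice and text feature vectors, then scores actions.
    def __init__(self, voice_dim, text_dim, hidden_dim, num_actions):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(voice_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, voice_feat, text_feat):
        fused = torch.cat([voice_feat, text_feat], dim=-1)  # concatenation fusion
        return self.classifier(fused)                       # logits over actions

# Example: pick the highest-scoring action for one utterance.
model = LateFusionActionSelector(voice_dim=128, text_dim=256, hidden_dim=64, num_actions=5)
action = model(torch.randn(1, 128), torch.randn(1, 256)).argmax(dim=-1)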