User modelling using Multi-modal fusion (updated Dec 24, 2019; Python)
PyTorch implementation of HUSE: Hierarchical Universal Semantic Embeddings (https://arxiv.org/pdf/1911.05978.pdf)
Gower's method for finding latent networks in multi-modal data
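Gower's method rests on the Gower distance, which combines numeric and categorical features into one dissimilarity measure: numeric features contribute range-normalized absolute differences, categorical features contribute 0 for a match and 1 for a mismatch, averaged over all features. A minimal sketch (the function name and the numeric/categorical split are illustrative, not from the repository above):

```python
import numpy as np

def gower_distance(X_num, X_cat):
    """Pairwise Gower distance for mixed numeric/categorical data.

    X_num: (n, p_num) float array of numeric features.
    X_cat: (n, p_cat) array of categorical features.
    Returns an (n, n) matrix of distances in [0, 1].
    """
    n = X_num.shape[0]
    # Range-normalize each numeric column; guard against zero range.
    ranges = X_num.max(axis=0) - X_num.min(axis=0)
    ranges[ranges == 0] = 1.0
    p = X_num.shape[1] + X_cat.shape[1]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            num_part = np.abs(X_num[i] - X_num[j]) / ranges
            cat_part = (X_cat[i] != X_cat[j]).astype(float)
            D[i, j] = (num_part.sum() + cat_part.sum()) / p
    return D
```

The resulting matrix can feed any distance-based clustering or network-construction step, which is presumably where the "latent networks" come from.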
Code for the COLING 2020 paper: Probing Multimodal Embeddings for Linguistic Properties
My master's thesis: Siamese multi-hop attention for cross-modal retrieval
IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT
Segment-level autoencoders for multimodal representation
Collects a multimodal dataset of Wikipedia articles and their images
The code for our INTERSPEECH 2020 paper - Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition
Deep Multiset Canonical Correlation Analysis - An extension of CCA to multiple datasets
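For context, the two-view CCA that multiset CCA generalizes finds projections of two datasets whose projected coordinates are maximally correlated; one standard solution is an SVD of the whitened cross-covariance. A minimal sketch of that classical two-view case (the function name and regularizer are my own, not the repository's API):

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Classical two-view CCA via SVD of the whitened cross-covariance.

    Returns the top-k canonical correlations and the projection
    matrices for X and Y. `reg` is a small ridge term for stability.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(T)
    # Singular values of T are the canonical correlations.
    return s[:k], inv_sqrt(Cxx) @ U[:, :k], inv_sqrt(Cyy) @ Vt.T[:, :k]
```

Multiset CCA extends this objective to more than two datasets, and the "deep" variant replaces the linear projections with neural networks.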
This repository contains the implementation of the paper: Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
Real-world photo sequence question answering system (MemexQA). CVPR'18 and TPAMI'19