Code for the paper "Factorized Multimodal Transformer for Multimodal Sequential Learning"

This is a transformer encoder model in which each encoder layer contains multiple attention groups. An attention group attends to a unique combination of modalities, ranging from a single modality up to all three. All attention groups share the same number of heads, and each head has its own unique set of convolutions.
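The sketch below illustrates this factorization, assuming a PyTorch implementation. The class and argument names (`AttentionGroup`, `FMTEncoderLayer`) are hypothetical, not this repository's actual API, and the per-head convolutions are omitted for brevity; it shows one attention group per non-empty modality combination, all sharing the same head count.

```python
# Minimal sketch of one factorized encoder layer (illustrative, not the repo's API).
import itertools
import torch
import torch.nn as nn

class AttentionGroup(nn.Module):
    """Multi-head self-attention over the features of one modality subset."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return out

class FMTEncoderLayer(nn.Module):
    """One encoder layer with an attention group per non-empty modality subset."""
    def __init__(self, modalities, d_model, n_heads):
        super().__init__()
        # All 2^M - 1 non-empty combinations of the input modalities,
        # e.g. for (text, audio, video) there are 7 groups in total.
        self.combos = [
            c for r in range(1, len(modalities) + 1)
            for c in itertools.combinations(modalities, r)
        ]
        # Every group shares the same head count, as described above.
        # The per-head convolutions are omitted in this sketch.
        self.groups = nn.ModuleDict({
            "+".join(c): AttentionGroup(d_model * len(c), n_heads)
            for c in self.combos
        })

    def forward(self, feats):
        # feats: dict mapping modality name -> (batch, seq, d_model) tensor
        outputs = {}
        for combo in self.combos:
            # Concatenate the features of the modalities in this combination,
            # then attend over the concatenated sequence.
            x = torch.cat([feats[m] for m in combo], dim=-1)
            outputs[combo] = self.groups["+".join(combo)](x)
        return outputs

# Usage: three modalities yield 7 attention groups in each layer.
layer = FMTEncoderLayer(("text", "audio", "video"), d_model=32, n_heads=4)
feats = {m: torch.randn(2, 10, 32) for m in ("text", "audio", "video")}
outs = layer(feats)  # one output tensor per modality combination
```

Because the groups are factorized over modality subsets rather than pooled into a single attention, each unimodal, bimodal, and trimodal interaction gets its own dedicated attention parameters within the same layer.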
