MTAG (Modal-Temporal Attention Graph) is a GNN-based machine learning framework that can learn fusion and alignment for unaligned multimodal sequences.

Our code is written as an extension to the awesome PyTorch Geometric library. Users are encouraged to read its installation guide and documentation to understand the basics.

Our main contributions include:

  • A graph builder to construct graphs with modal and temporal edges.
  • A new GNN convolution operation, called MTGATConv, that uses a distinct attention for each combination of modality and temporal ordering, and additionally transforms each node based on its modality type. It is effectively a combination of RGCNConv and GATConv with an efficient implementation. We hope this operation can be included in PyTorch Geometric as a standard operation.
  • A TopK pooling operation to prune edges with low attention weights.
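To make the three pieces above concrete, here is a minimal pure-Python sketch of the idea: label each directed edge by its modality pair and temporal ordering, score edges with a per-type attention vector, and prune to the top-k edges. All names here (edge_type, build_graph, attention_scores, topk_prune) are illustrative assumptions, not the repo's actual API, which operates on PyTorch Geometric tensors.

```python
# Illustrative sketch only -- not the repo's actual implementation.

def edge_type(mod_i, mod_j, t_i, t_j):
    """Label an edge by its modality pair and temporal ordering."""
    if t_j < t_i:
        order = "past"
    elif t_j > t_i:
        order = "future"
    else:
        order = "present"
    return (mod_i, mod_j, order)

def build_graph(nodes):
    """nodes: list of (modality, timestamp) pairs.
    Fully connect the nodes and label every directed edge."""
    return [(i, j, edge_type(mi, mj, ti, tj))
            for i, (mi, ti) in enumerate(nodes)
            for j, (mj, tj) in enumerate(nodes)
            if i != j]

def attention_scores(x, edges, type_params):
    """Score each edge (i, j) with a dot product between the concatenated
    node features and a weight vector distinct to that edge's type."""
    return [sum(a * v for a, v in zip(type_params[et], x[i] + x[j]))
            for i, j, et in edges]

def topk_prune(edges, scores, k):
    """Keep only the k edges with the highest attention scores."""
    ranked = sorted(range(len(edges)), key=lambda i: -scores[i])
    return [edges[i] for i in ranked[:k]]
```

In the actual model the per-type attention is learned end-to-end inside the convolution, and pruning is applied dynamically during message passing; the sketch only shows how edge typing drives both steps.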


Please refer to requirement.txt for setup.

Dataset Preparation

Download the following datasets (please copy and paste the URL into your browser, as clicking the link might not work):

and put them into a desired folder (e.g. <dataroot>). Then specify the folder containing the data of the desired dataset. For example:

python \
--dataroot <dataroot>

Running Example


To visualize the edges:

jupyter notebook network_inference_visualize.ipynb


A more comprehensive hyperparameter list (along with the performance we obtained for each setting) can be found in this Google Sheet. For any parameters not specified there, we used the default values in the code.


@inproceedings{yang-etal-2021-mtag,
    title = "{MTAG}: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences",
    author = "Yang, Jianing  and
      Wang, Yongxin  and
      Yi, Ruitao  and
      Zhu, Yuying  and
      Rehman, Azaan  and
      Zadeh, Amir  and
      Poria, Soujanya  and
      Morency, Louis-Philippe",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "",
    pages = "1009--1021",
    abstract = "Human communication is multimodal in nature; it is through multiple modalities such as language, voice, and facial expressions, that opinions and emotions are expressed. Data in this domain exhibits complex multi-relational and temporal interactions. Learning from this data is a fundamentally challenging research problem. In this paper, we propose Modal-Temporal Attention Graph (MTAG). MTAG is an interpretable graph-based neural model that provides a suitable framework for analyzing multimodal sequential data. We first introduce a procedure to convert unaligned multimodal sequence data into a graph with heterogeneous nodes and edges that captures the rich interactions across modalities and through time. Then, a novel graph fusion operation, called MTAG fusion, along with a dynamic pruning and read-out technique, is designed to efficiently process this modal-temporal graph and capture various interactions. By learning to focus only on the important interactions within the graph, MTAG achieves state-of-the-art performance on multimodal sentiment analysis and emotion recognition benchmarks, while utilizing significantly fewer model parameters.",
}


Code for NAACL 2021 paper: MTAG: Modal-Temporal Attention Graph for Unaligned Human Multimodal Language Sequences







