Implementation of the paper "VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis"
VistaNet

This is the code for the paper:

VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis
Quoc-Tuan Truong and Hady W. Lauw
Presented at AAAI 2019

We provide:

  • Code to train and evaluate the model
  • Data used for the experiments

If you find the code and data useful in your research, please cite:

@inproceedings{VistaNet,
  title={VistaNet: Visual Aspect Attention Network for Multimodal Sentiment Analysis},
  author={Truong, Quoc-Tuan and Lauw, Hady W},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  publisher={AAAI Press},
  year={2019},
}

Requirements

  • Python 3
  • TensorFlow >= 1.12.0
  • tqdm
  • GloVe word embeddings

How to run

  1. Make sure the data is ready, then run the pre-processing script:
python data_preprocess.py
  2. Train VistaNet:
python train.py --hidden_dim 50 --att_dim 100 --num_images 3 --batch_size 32 --learning_rate 0.001 --num_epochs 20
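The flags passed to train.py above are the model's hyperparameters. A hypothetical argparse sketch of how they could be declared, with the example run's values as defaults (this mirrors the command line only; it is not the repository's actual train.py):

```python
import argparse

def build_arg_parser():
    # Hypothetical flag declarations matching the example command line;
    # help strings are informal descriptions, not the repository's own.
    parser = argparse.ArgumentParser(description='Train VistaNet (sketch)')
    parser.add_argument('--hidden_dim', type=int, default=50,
                        help='RNN hidden state size')
    parser.add_argument('--att_dim', type=int, default=100,
                        help='attention projection size')
    parser.add_argument('--num_images', type=int, default=3,
                        help='number of images used per review')
    parser.add_argument('--batch_size', type=int, default=32)
    parser.add_argument('--learning_rate', type=float, default=0.001)
    parser.add_argument('--num_epochs', type=int, default=20)
    return parser
```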

Contact

Questions and discussion are welcome: www.qttruong.info