# 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset

We introduce 3AM, an ambiguity-aware multimodal machine translation (MMT) dataset containing ~26K image-text pairs. Compared with previous MMT datasets, 3AM encompasses a greater diversity of caption styles and a wider range of visual concepts. Please check out our paper for more details.
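To illustrate the kind of ambiguity the dataset targets, here is a minimal hypothetical sketch in Python. The field names, file names, and translations below are invented for illustration and do not reflect the actual data schema; the point is that a source caption can admit multiple translations, and the paired image is what disambiguates them.

```python
# Hypothetical illustration of an ambiguity-aware MMT example.
# All field names and values are invented; see the paper and the
# data folder for the real schema.
example = {
    "image": "0001.jpg",
    "source": "A bat lying on the grass.",
    # "bat" is ambiguous from text alone; the image resolves it.
    "candidate_translations": {
        "animal": "一只蝙蝠躺在草地上。",
        "sports": "一根球棒放在草地上。",
    },
}

def disambiguate(visual_concept: str) -> str:
    """Pick the translation matching the concept shown in the image."""
    return example["candidate_translations"][visual_concept]

print(disambiguate("animal"))  # the image shows the animal sense
```

A text-only translation model must guess between the candidates; a multimodal model can condition on the image to select the correct one.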

## Download

Please download the dataset here. The text data is also available in the `data` folder.

## Training

### Selective Attention

The code for training the selective attention model is available here; it is based on fairseq-mmt.

```bash
# train
bash train_mmt.sh
# test
bash translate_mmt.sh
```

### VL-Bart, VL-T5

The code for training the VL-Bart and VL-T5 models is available here; it is based on VL-T5.

```bash
# VL-Bart
bash scripts/MMT_VLBart.sh
# VL-T5
bash scripts/MMT_VLT5.sh
```

## Contact

If you have any questions, please email yc27434@umac.mo.

## Citation

If you use this dataset in your research, please cite:

TODO
