Papers of Vision-and-Language Pretraining (VLP)

VLP with Different Motivations

First Moves for VLP

  1. Dialog-based interactive image retrieval Xiaoxiao Guo, Hui Wu, Yu Cheng, Steven Rennie, Gerald Tesauro, Rogerio Schmidt Feris NeurIPS 2018 [pdf] [code]

  2. Bilinear attention networks Jin-Hwa Kim, Jaehyun Jun, Byoung-Tak Zhang NeurIPS 2018 [pdf] [code]

  3. Advancing state-of-the-art image recognition with deep learning on hashtags Manohar Paluri, Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan [pdf]

  4. Bottom-up and top-down attention for image captioning and visual question answering Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, Lei Zhang CVPR 2018 [pdf] [code]

  5. Relation-aware graph attention network for visual question answering Linjie Li, Zhe Gan, Yu Cheng, Jingjing Liu ICCV 2019 [pdf] [code]

  6. Lxmert: Learning cross-modality encoder representations from transformers Hao Tan, Mohit Bansal EMNLP 2019 [pdf] [code]

  7. Spatio-temporal dynamics and semantic attribute enriched visual encoding for video captioning Nayyer Aafaq, Naveed Akhtar, Wei Liu, Syed Zulqarnain Gilani, Ajmal Mian CVPR 2019 [pdf]

  8. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks Jiasen Lu, Dhruv Batra, Devi Parikh, Stefan Lee arXiv 2019 [pdf] [code]

  9. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou AAAI 2020 [pdf]

  10. Uniter: Universal image-text representation learning Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, Jingjing Liu ECCV 2020 [pdf] [code]

  11. Unified vision-language pre-training for image captioning and vqa Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, Jianfeng Gao AAAI 2020 [pdf] [code]

  12. Visualbert: A simple and performant baseline for vision and language Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang arXiv 2019 [pdf] [code]

  13. Vl-bert: Pre-training of generic visual-linguistic representations Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai ICLR 2020 [pdf] [code]

  14. Fusion of detected objects in text for visual question answering Chris Alberti, Jeffrey Ling, Michael Collins, David Reitter arXiv 2019 [pdf] [code]

  15. X-lxmert: Paint, caption and answer questions with multi-modal transformers Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, Aniruddha Kembhavi EMNLP 2020 [pdf] [code]

  16. Iterative answer prediction with pointer-augmented multimodal transformers for textvqa Ronghang Hu, Amanpreet Singh, Trevor Darrell, Marcus Rohrbach CVPR 2020 [pdf] [code]

  17. Mural: multimodal, multitask retrieval across languages Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge arXiv 2021 [pdf]

  18. Align before fuse: Vision and language representation learning with momentum distillation Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi arXiv 2021 [pdf] [code]

  19. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, Haifeng Wang ACL 2021 [pdf] [code]

  20. Interbert: Vision-and-language interaction for multi-modal pretraining Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, Hongxia Yang arXiv 2020 [pdf]

  21. Simvlm: Simple visual language model pretraining with weak supervision Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, Yuan Cao ICLR 2022 [pdf]

  22. Xgpt: Cross-modal generative pre-training for image captioning Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, Xin Liu, Ming Zhou NLPCC 2021 [pdf]
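
Most of the region-feature models in this list (e.g., VisualBERT, VL-BERT, UNITER) share the same recipe: concatenate detected-region features with word embeddings, encode them with a single transformer, and pretrain with masked language modeling plus image-text matching. The snippet below is a minimal, illustrative sketch of that recipe, not code from any of the cited repositories; the vocabulary size, hidden sizes, layer counts, and module names are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class TinyVLPEncoder(nn.Module):
    """Single-stream encoder over [word tokens ; detected-region features] (illustrative only)."""
    def __init__(self, vocab_size=30522, region_dim=2048, hidden=256, layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, hidden)
        self.region_proj = nn.Linear(region_dim, hidden)    # project RoI features into the text space
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.mlm_head = nn.Linear(hidden, vocab_size)        # masked language modeling head
        self.itm_head = nn.Linear(hidden, 2)                 # image-text matching head

    def forward(self, token_ids, region_feats):
        tokens = self.word_emb(token_ids)                    # (B, T, H)
        regions = self.region_proj(region_feats)             # (B, R, H)
        fused = self.encoder(torch.cat([tokens, regions], dim=1))
        mlm_logits = self.mlm_head(fused[:, :token_ids.size(1)])  # predict masked words
        itm_logits = self.itm_head(fused[:, 0])              # first token as a [CLS]-style summary
        return mlm_logits, itm_logits
```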

Diversification of VLP Approaches

  1. Unifying vision-and-language tasks via text generation Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal ICML 2021 [pdf] [code]

  2. Lightningdot: Pre-training visual-semantic embeddings for real-time image-text retrieval Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, Jingjing Liu NAACL 2021 [pdf] [code]

  3. Unsupervised vision-and-language pre-training without parallel images and captions Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang NAACL 2021 [pdf] [code]

  4. 12-in-1: Multi-task vision and language representation learning Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, Stefan Lee CVPR 2020 [pdf] [code]

  5. Ernie-vil: Knowledge enhanced vision-language representations through scene graph Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang AAAI 2021 [pdf]

  6. Large-scale adversarial training for vision-and-language representation learning Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu NeurIPS 2020 [pdf] [code_1] [code_2]

  7. Contrastive visual-linguistic pretraining Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su arXiv 2020 [pdf] [code]

  8. Lamp: label augmented multimodal pretraining Jia Guo, Chen Zhu, Yilun Zhao, Heda Wang, Yao Hu, Xiaofei He, Deng Cai arXiv 2020 [pdf]

Polishing Representations in VLP

  1. Vinvl: Revisiting visual representations in vision-language models Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao CVPR 2021 [pdf] [code]

  2. Oscar: Object-semantics aligned pre-training for vision-language tasks Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao ECCV 2020 [pdf] [code]

  3. Learning visual representations with caption annotations Mert Bulent Sariyildiz, Julien Perez, Diane Larlus ECCV 2020 [pdf]

  4. Virtex: Learning visual representations from textual annotations Karan Desai, Justin Johnson CVPR 2021 [pdf] [code]

  5. Vokenization: Improving language understanding with contextualized, visual-grounded supervision Hao Tan, Mohit Bansal EMNLP 2020 [pdf] [code]

  6. Scaling up visual and vision-language representation learning with noisy text supervision Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig ICML 2021 [pdf] [code]

  7. Learning transferable visual models from natural language supervision Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever ICML 2021 [pdf] [code]

  8. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, Furu Wei NeurIPS 2022 [pdf] [code]

  9. Data efficient masked language modeling for vision and language Yonatan Bitton, Gabriel Stanovsky, Michael Elhadad, Roy Schwartz EMNLP 2021 findings [pdf] [code]

  10. Flava: A foundational language and vision alignment model Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, Douwe Kiela CVPR 2022 [pdf] [code]

  11. Multimodal few-shot learning with frozen language models Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill NeurIPS 2021 [pdf] [code]
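
Several entries in this section (notably CLIP and ALIGN) learn representations with a symmetric image-text contrastive objective: matched image-caption pairs are pulled together while all other pairings in the batch are pushed apart. Below is a minimal sketch of that InfoNCE-style loss, assuming the image and text embeddings have already been computed; it is illustrative only and not taken from the cited codebases.

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    """image_embeds, text_embeds: (N, D) embeddings of N paired images and captions."""
    img = F.normalize(image_embeds, dim=-1)
    txt = F.normalize(text_embeds, dim=-1)
    logits = img @ txt.t() / temperature                 # (N, N) scaled cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs sit on the diagonal; contrast in both directions and average.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```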

End2End VLP

  1. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, Jianlong Fu arXiv 2020 [pdf] [code]

  2. Vilt: Vision-and-language transformer without convolution or region supervision Wonjae Kim, Bokyung Son, Ildoo Kim ICML 2021 [pdf] [code]

  3. Seeing out of the box: End-to-end pre-training for vision-language representation learning Zhicheng Huang, Zhaoyang Zeng, Yupan Huang, Bei Liu, Dongmei Fu, Jianlong Fu CVPR 2021 [pdf] [code]

  4. E2e-vlp: End-to-end vision-language pre-training enhanced by visual learning Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, Fei Huang ACL 2021 [pdf]

  5. An image is worth 16x16 words: Transformers for image recognition at scale Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby ICLR 2021 [pdf] [code]
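
The end-to-end models in this section drop the pretrained object detector: Pixel-BERT and SOHO encode raw pixels, and ViLT goes further by reusing the ViT patch-embedding trick so that image patches and word tokens enter a single shared transformer. The following sketch illustrates that detector-free input pipeline; the 16x16 patch size follows ViT (a stride-16 Conv2d is the usual shorthand for its linear patch projection), while every other dimension and module choice is an arbitrary assumption for illustration.

```python
import torch
import torch.nn as nn

patch_embed = nn.Conv2d(3, 256, kernel_size=16, stride=16)   # linear projection of 16x16 patches
word_embed = nn.Embedding(30522, 256)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True), num_layers=2)

images = torch.randn(2, 3, 224, 224)                          # a toy batch of two images
token_ids = torch.randint(0, 30522, (2, 12))                  # and two 12-token captions

patches = patch_embed(images).flatten(2).transpose(1, 2)      # (2, 196, 256) patch tokens
tokens = word_embed(token_ids)                                 # (2, 12, 256) word tokens
fused = encoder(torch.cat([tokens, patches], dim=1))           # one shared transformer over both
print(fused.shape)                                             # torch.Size([2, 208, 256])
```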

VLP Applications

  1. Vd-bert: A unified vision and dialog transformer with bert Yue Wang, Shafiq Joty, Michael R. Lyu, Irwin King, Caiming Xiong, Steven C.H. Hoi EMNLP 2020 [pdf] [code]

  2. Towards learning a generic agent for vision-and-language navigation via pre-training Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao CVPR 2020 [pdf] [code]

  3. Vivo: Visual vocabulary pre-training for novel object captioning Xiaowei Hu, Xi Yin, Kevin Lin, Lijuan Wang, Lei Zhang, Jianfeng Gao, Zicheng Liu AAAI 2021 [pdf]

  4. Tap: Text-aware pre-training for text-vqa and text-caption Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, Jiebo Luo CVPR 2021 [pdf] [code]

  5. Kaleido-bert: Vision-language pre-training on fashion domain Mingchen Zhuge, Dehong Gao, Deng-Ping Fan, Linbo Jin, Ben Chen, Haoming Zhou, Minghui Qiu, Ling Shao CVPR 2021 [pdf] [code]

  6. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline Vishvak Murahari, Dhruv Batra, Devi Parikh, Abhishek Das ECCV 2020 [pdf] [code]

  7. Reasoning over vision and language: Exploring the benefits of supplemental knowledge Violetta Shevchenko, Damien Teney, Anthony Dick, Anton van den Hengel arXiv 2021 [pdf]

  8. Multimodal review generation for recommender systems Quoc-Tuan Truong, Hady Lauw WWW 2019 [pdf]

  9. Vln bert: A recurrent vision-and-language bert for navigation Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould arXiv 2020 [pdf] [code]

  10. Curriculum learning for vision-and-language navigation Jiwen Zhang, Zhongyu Wei, Jianqing Fan, Jiajie Peng NeurIPS 2021 [pdf] [code]

  11. Show, attend and tell: Neural image caption generation with visual attention Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio ICML 2015 [pdf] [code]

  12. End-to-end object detection with transformers Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko arXiv 2020 [pdf] [code]

  13. Egnet: Edge guidance network for salient object detection Jia-Xing Zhao, Jiangjiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng ICCV 2019 [pdf] [code]

  14. Fashionbert: Text and image matching with adaptive loss for cross-modal retrieval Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, Hao Wang SIGIR 2020 [pdf]

Assessment of Risks in VLP

  1. A closer look at the robustness of vision-and-language pre-trained models Linjie Li, Zhe Gan, Jingjing Liu arXiv 2020 [pdf]

  2. Causal attention for vision-language tasks Xu Yang, Hanwang Zhang, Guojun Qi, Jianfei Cai CVPR 2021 [pdf] [code]

  3. Adversarial vqa: A new benchmark for evaluating the robustness of vqa models Linjie Li, Jie Lei, Zhe Gan, Jingjing Liu ICCV 2021 [pdf]

  4. Worst of both worlds: Biases compound in pre-trained vision-and-language models Tejas Srinivasan, Yonatan Bisk arXiv 2021 [pdf]

  5. Attacking visual language grounding with adversarial examples: A case study on neural image captioning Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Cho-Jui Hsieh ACL 2018 [pdf] [code]

Model Compression for VLP

  1. Minivlm: A smaller and faster vision-language model Jianfeng Wang, Xiaowei Hu, Pengchuan Zhang, Xiujun Li, Lijuan Wang, Lei Zhang, Jianfeng Gao, Zicheng Liu arXiv 2020 [pdf]

  2. Compressing visual-linguistic model via knowledge distillation Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, Zicheng Liu arXiv 2021 [pdf]

  3. Playing lottery tickets with vision and language Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu AAAI 2022 [pdf]

Multilinguality in VLP

  1. Uc2: Universal cross-lingual cross-modal vision-and-language pre-training Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, Jingjing Liu CVPR 2021 [pdf] [code]

  2. M3p: Learning universal representations via multitask multilingual multimodal pre-training Minheng Ni, Haoyang Huang, Lin Su, Edward Cui, Taroon Bharti, Lijuan Wang, Jianfeng Gao, Dongdong Zhang, Nan Duan CVPR 2021 [pdf] [code]

  3. Multilingual multimodal pre-training for zero-shot cross-lingual transfer of vision-language models Po-Yao Huang, Mandela Patrick, Junjie Hu, Graham Neubig, Florian Metze, Alexander Hauptmann NAACL 2021 [pdf] [code]

Probing Analysis in VLP

  1. Behind the scene: Revealing the secrets of pretrained vision-and-language models Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, Jingjing Liu ECCV 2020 [pdf]

  2. What does bert with vision look at? Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang ACL 2020 [pdf]

  3. Decoupling the role of data, attention, and losses in multimodal transformers Lisa Anne Hendricks, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, Aida Nematzadeh TACL 2021 [pdf] [code]

  4. Probing inter-modality: Visual parsing with self-attention for vision-and-language pre-training Hongwei Xue, Yupan Huang, Bei Liu, Houwen Peng, Jianlong Fu, Houqiang Li, Jiebo Luo NeurIPS 2021 [pdf]

  5. Analyzing compositionality in visual question answering Sanjay Subramanian, Sameer Singh, Matt Gardner ViGIL@NeurIPS 2019 [pdf]

  6. Are we pretraining it right? Digging deeper into visio-linguistic pretraining Amanpreet Singh, Vedanuj Goswami, Devi Parikh arXiv 2020 [pdf]

Datasets for VLP

  1. The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, Vittorio Ferrari International Journal of Computer Vision 2020 [pdf] [code]

  2. Textcaps: a dataset for image captioning with reading comprehension Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, Amanpreet Singh ECCV 2020 [pdf]

  3. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi arXiv 2022 [pdf] [code]

  4. Zero-shot text-to-image generation Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever PMLR 2021 [pdf]

Future Directions and Prospective Issues

Enhancing Interpretability of VLP Models

  1. Is attention interpretable? Sofia Serrano, Noah A. Smith ACL 2019 [pdf]

  2. Interpretable deep learning: Interpretation, interpretability, trustworthiness, and beyond Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou arXiv 2021 [pdf] [code]

  3. Interpretable convolutional neural networks with dual local and global attention for review rating prediction Sungyong Seo, Jing Huang, Hao Yang, Yan Liu ACM Conference on Recommender Systems 2017 [pdf] [code]

Evaluation of VLP Models

  1. Vqa-lol: Visual question answering under the lens of logic Tejas Gokhale, Pratyay Banerjee, Chitta Baral, Yezhou Yang ECCV 2020 [pdf]

  2. Towards causal vqa: Revealing and reducing spurious correlations by invariant and covariant semantic editing Vedika Agarwal, Rakshith Shetty, Mario Fritz arXiv 2019 [pdf]

  3. Roses are red, violets are blue... but should vqa expect them to? Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf arXiv 2020 [pdf] [code]

Survey of Vision-and-Language Models

  1. Vlp: A survey on vision-language pre-training Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, Bo Xu arXiv 2022 [pdf] [code]

  2. A survey of vision-language pre-trained models Yifan Du, Zikang Liu, Junyi Li, Wayne Xin Zhao IJCAI 2022 [pdf]

  3. A survey on automatic image caption generation Shuang Bai, Shan An Neurocomputing 2018 [pdf]

  4. Automatic description generation from images: A survey of models, datasets, and evaluation measures Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, Barbara Plank Journal of Artificial Intelligence Research 2016 [pdf]

  5. Deep multimodal representation learning: A survey Wenzhong Guo, Jianwen Wang, Shiping Wang IEEE Access 2019 [pdf]

  6. A comprehensive survey of deep learning for image captioning Md. Zakir Hossain, Ferdous Sohel, Mohd Fairuz Shiratuddin, Hamid Laga ACM Computing Surveys 2018 [pdf] [code]

  7. Video question-answering techniques, benchmark datasets and evaluation metrics leveraging video captioning: A comprehensive survey Khushboo Khurana, Umesh Deshpande IEEE Access 2021 [pdf]

  8. Visual to text: Survey of image and video captioning Sheng Li, Zhiqiang Tao, Kang Li, Yun Fu IEEE Transactions on Emerging Topics in Computational Intelligence 2019 [pdf]

  9. From show to tell: A survey on image captioning Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Silvia Cascianelli, Giuseppe Fiameni, Rita Cucchiara IEEE Transactions on Pattern Analysis and Machine Intelligence 2023 [pdf]

  10. Visual question answering: A survey of methods and datasets Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, Anton van den Hengel CVIU 2017 [pdf]

  11. Bridging vision and language from the video-to-text perspective: A comprehensive review Jesus Perez-Martin, Benjamin Bustos, Silvio Jamil F. Guimarães, Ivan Sipiran, Jorge Pérez, Grethel Coello Said Artificial Intelligence Review 2021 [pdf]

  12. Visual question answering: Datasets, algorithms, and future challenges Kushal Kafle, Christopher Kanan arXiv 2016 [pdf] [code]

  13. Challenges and prospects in vision and language research Kushal Kafle, Robik Shrestha, Christopher Kanan arXiv 2019 [pdf]

  14. Multimodal intelligence: Representation learning, information fusion, and applications Chao Zhang, Zichao Yang, Xiaodong He, Li Deng IEEE Journal of Selected Topics in Signal Processing 2020 [pdf]
