
Vision Transformers are Parameter-Efficient Audio-Visual Learners

📗 Paper || 🏠 Project Page

License: MIT

This is the PyTorch implementation of our paper:

Vision Transformers are Parameter-Efficient Audio-Visual Learners

Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, and Gedas Bertasius

In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023



Our Method

📝 Preparation

  • See each folder for more detailed settings; a generic sketch of the parameter-efficient setup follows this list.
  • Audio-Visual Event Localization: ./AVE
  • Audio-Visual Segmentation: ./AVS
  • Audio-Visual Question Answering: ./AVQA
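
To illustrate the parameter-efficient idea the paper builds on (a frozen pre-trained ViT with only small trainable adapter modules), here is a minimal, generic PyTorch sketch. The `Adapter` class, the `timm` backbone, and all names are illustrative assumptions for this sketch, not the modules used in this repository; see the task folders for the actual implementation.

```python
# Illustrative sketch only (NOT this repo's code): freeze a pre-trained ViT
# and train only small bottleneck adapters, the general parameter-efficient recipe.
import torch
import torch.nn as nn
import timm  # assumed available; any frozen ViT backbone works similarly


class Adapter(nn.Module):
    """Small bottleneck adapter: down-project, non-linearity, up-project."""

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's features intact.
        return x + self.up(self.act(self.down(x)))


# Frozen ViT backbone; only the adapters (and a task head) receive gradients.
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
for p in vit.parameters():
    p.requires_grad = False

# One adapter per transformer block; in practice these are interleaved with
# the frozen blocks during the forward pass.
adapters = nn.ModuleList(Adapter(vit.embed_dim) for _ in vit.blocks)

trainable = sum(p.numel() for p in adapters.parameters())
print(f"trainable adapter params: {trainable:,}")
```

In the paper's method, such adapters additionally exchange information between audio and visual tokens; refer to each task folder for the real modules and training scripts.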

🎓 Cite

If you use this code in your research, please cite:

@InProceedings{LAVISH_CVPR2023,
author = {Lin, Yan-Bo and Sung, Yi-Lin and Lei, Jie and Bansal, Mohit and Bertasius, Gedas},
title = {Vision Transformers are Parameter-Efficient Audio-Visual Learners},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2023}
}

👍 Acknowledgments

Our code is based on AVSBench and MUSIC-AVQA.

✏ Future work: model checkpoints

| Tasks | Checkpoints |
|-------|-------------|
| AVE   | model       |
| AVS   | model       |
| AVQA  | model       |
