
Add VATT model #19865

Open
2 tasks done
johko opened this issue Oct 25, 2022 · 8 comments

Comments

johko (Contributor) commented Oct 25, 2022

Model description

Hey,
as discussed with @NielsRogge a few weeks back, I'd like to work on adding the "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text" model from Google.

It is basically three transformers (video/audio/text) that are trained jointly in an unsupervised manner using contrastive loss functions. For downstream tasks the transformers are fine-tuned separately, but the authors also explore a version that shares the weights across all modalities.

For pre-training they use text-video-audio triplets from HowTo100M and video-audio pairs from AudioSet. The authors describe how to fine-tune VATT for vision and audio classification tasks and provide weights for the fine-tuned versions.

The vision backbone is ViT, the audio backbone is a waveform Transformer, and for text they use BERT/T5. A rough sketch of the overall setup is below.
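
For anyone skimming the paper, here is a minimal, hypothetical PyTorch sketch of the idea (not the official VATT code, which is in TensorFlow): three modality encoders produce embeddings that are projected into a common space and aligned with an NCE-style contrastive loss. The dimensions, projection head, modality pairings, and loss weighting below are illustrative assumptions; the paper actually uses a hierarchical common-space projection and MIL-NCE for the video-text pair.

```python
# Hypothetical sketch, not the official VATT implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityProjection(nn.Module):
    """Projects a modality-specific embedding into a shared space."""

    def __init__(self, in_dim: int, shared_dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(in_dim, shared_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)


def nce_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired embeddings (a_i matches b_i)."""
    logits = a @ b.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)    # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Assume these come from the three backbone transformers
# (ViT-style video encoder, waveform Transformer, BERT/T5 text encoder).
batch, dim = 8, 768
video_feats = torch.randn(batch, dim)
audio_feats = torch.randn(batch, dim)
text_feats = torch.randn(batch, dim)

video_proj, audio_proj, text_proj = (ModalityProjection(dim) for _ in range(3))
zv, za, zt = video_proj(video_feats), audio_proj(audio_feats), text_proj(text_feats)

# Pairwise alignment losses (video-audio and video-text); weighting omitted.
loss = nce_loss(zv, za) + nce_loss(zv, zt)
loss.backward()
```

The modality-agnostic (shared-weights) variant mentioned above would reuse a single backbone for all three inputs; details like DropToken and the exact projection hierarchy are omitted in this sketch.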

Open source status

  • The model implementation is available
  • The model weights are available

Provide useful links for the implementation

Paper: https://arxiv.org/pdf/2104.11178.pdf
GitHub: https://github.com/google-research/google-research/tree/master/vatt

fcakyon (Contributor) commented Nov 20, 2022

@johko have you started implementing it?

johko (Contributor, Author) commented Nov 22, 2022

@fcakyon yes, I have started, but progress is still rather slow, as this is my first model contribution and I have to figure out a few things.

fcakyon (Contributor) commented Nov 22, 2022

@johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :)

Are you working on a TF implementation?

johko (Contributor, Author) commented Nov 27, 2022

Sorry for the late reply (again 🙈). Yes, I'm working on a TF implementation. Since the original repo uses TensorFlow, I'm starting with that and will look into PyTorch afterwards.

fcakyon (Contributor) commented Nov 27, 2022

@johko, thanks for the response! I may also help with the PyTorch part once you finalize the TF implementation 👍

johko (Contributor, Author) commented Nov 27, 2022

@fcakyon that would be great, as my expertise is more in TF 🙂

johko (Contributor, Author) commented Jan 24, 2023

Hey @NielsRogge, I'm sorry but I think I have to stop working on this for good. I'd love to finish it, but every time I think I finally have some time for it, something else comes up 😞

I think I just can't provide a big contribution like this at the moment and would rather focus on smaller things. But maybe @fcakyon wants to pick it up.

Sorry for blocking this so long.

pretbc commented Sep 20, 2023

Any news about a VATT PyTorch implementation?
