MMF is a modular framework for vision and language multimodal research from Facebook AI Research. MMF contains reference implementations of state-of-the-art vision and language models and has powered multiple research projects at Facebook AI Research. See the full list of projects inside or built on MMF here.

MMF is powered by PyTorch, supports distributed training, and is un-opinionated, scalable, and fast. Use MMF to bootstrap your next vision and language multimodal research project by following the installation instructions. Take a look at the list of MMF features here.

MMF also acts as a starter codebase for challenges around vision and language datasets (the Hateful Memes, TextVQA, TextCaps, and VQA challenges). MMF was formerly known as Pythia. For an overview of how datasets and models work inside MMF, check out MMF's video overview.


Follow installation instructions in the documentation.
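As a quick sketch (assuming a Python 3 environment with pip available; see the official documentation for the authoritative steps and supported PyTorch versions), installing MMF from source typically looks like:

```shell
# Clone the MMF repository and install it in editable (development) mode.
git clone https://github.com/facebookresearch/mmf.git
cd mmf
pip install --editable .
```

Once installed, MMF exposes a `mmf_run` command-line entry point for launching training runs; the exact `config`, `model`, and `dataset` arguments to pass are described in the documentation for each project.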


Learn more about MMF here.


If you use MMF in your work or use any models published in MMF, please cite:

@misc{singh2020mmf,
  author =       {Singh, Amanpreet and Goswami, Vedanuj and Natarajan, Vivek and Jiang, Yu and Chen, Xinlei and Shah, Meet and
                 Rohrbach, Marcus and Batra, Dhruv and Parikh, Devi},
  title =        {MMF: A multimodal framework for vision and language research},
  howpublished = {\url{}},
  year =         {2020}
}


MMF is licensed under the BSD license, available in the LICENSE file.