# Omnivore: A Single Model for Many Visual Modalities


[paper][website]

OMNIVORE is a single vision model for many different visual modalities. It learns to construct representations that are aligned across visual modalities, without requiring training data that specifies correspondences between those modalities. Using OMNIVORE's shared visual representation, we can identify the nearest neighbors of a query image (ImageNet-1K validation set) in vision datasets that contain depth maps (ImageNet-1K training set), single-view 3D images (ImageNet-1K training set), and videos (Kinetics-400 validation set).

This repo contains the code to run inference with a pretrained model on an image, video, or RGBD image.

## Model Zoo

We share checkpoints for all the Omnivore models in the paper. The models are available via torch.hub, and we also share URLs to all the checkpoints.

The details of the models, their torch.hub names / checkpoint links, and their performance are listed below.

| Name | IN1K Top-1 | Kinetics-400 Top-1 | SUN RGBD Top-1 | Model |
| --- | --- | --- | --- | --- |
| Omnivore Swin T | 81.2 | 78.9 | 62.3 | `omnivore_swinT` |
| Omnivore Swin S | 83.4 | 82.2 | 64.6 | `omnivore_swinS` |
| Omnivore Swin B | 84.0 | 83.3 | 65.4 | `omnivore_swinB` |
| Omnivore Swin B (IN21k) | 85.3 | 84.0 | 67.2 | `omnivore_swinB_imagenet21k` |
| Omnivore Swin L (IN21k) | 86.0 | 84.1 | 67.1 | `omnivore_swinL_imagenet21k` |

Numbers are based on Tables 2 and 4 in the Omnivore paper.

We also provide a torch.hub model/checkpoint file for an Omnivore model fine-tuned on the Epic Kitchens 100 dataset: `omnivore_swinB_epic`.

## Setup and Installation

Omnivore requires PyTorch and torchvision; please follow PyTorch's getting-started instructions for installation. If you are using conda on a Linux machine, you can install the package and its dependencies with:

```shell
pip install .
```

Alternatively, you can install the required dependencies manually:

```shell
conda create --name omnivore python=3.8
conda activate omnivore
conda install pytorch=1.9.0 torchvision=0.10.0 torchaudio=0.9.0 cudatoolkit=11.1 -c pytorch
```

We also require einops, pytorchvideo, and timm, which can be installed via pip:

```shell
pip install einops
pip install pytorchvideo
pip install timm
```
## Usage

The models can be loaded via torch.hub as follows:

```python
import torch

model = torch.hub.load("facebookresearch/omnivore", model="omnivore_swinB")
```

The class mappings for the datasets can be downloaded as follows:

```shell
# ImageNet
wget https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json

# Kinetics
wget https://dl.fbaipublicfiles.com/pyslowfast/dataset/class_names/kinetics_classnames.json

# SUN RGBD
wget https://dl.fbaipublicfiles.com/omnivore/sunrgbd_classnames.json

# Epic Kitchens
wget https://dl.fbaipublicfiles.com/omnivore/epic_action_classes.csv
```
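Once downloaded, a class-mapping file can be used to turn the model's logits into human-readable labels. Below is a minimal sketch of this step; the tiny `id_to_name` dict and the `topk_labels` helper are illustrative stand-ins (the real mapping is built from the downloaded JSON, whose key layout differs per dataset, e.g. the ImageNet file maps index → [synset, name]):

```python
import torch

def topk_labels(logits: torch.Tensor, id_to_name: dict, k: int = 3):
    """Return the k most probable (class name, probability) pairs for one sample."""
    probs = logits.softmax(dim=-1)
    scores, ids = probs.topk(k, dim=-1)
    return [(id_to_name[i], s) for i, s in zip(ids[0].tolist(), scores[0].tolist())]

# Stand-in mapping; in practice build it from the downloaded JSON, e.g.
#   with open("imagenet_class_index.json") as f:
#       raw = json.load(f)                        # {"0": ["n01440764", "tench"], ...}
#   id_to_name = {int(k): v[1] for k, v in raw.items()}
id_to_name = {0: "cat", 1: "dog", 2: "fish"}

logits = torch.tensor([[0.2, 2.0, -1.0]])         # fake 1 x 3 prediction
print(topk_labels(logits, id_to_name, k=1))       # highest-scoring class first
```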

The list of videos used for Kinetics-400 experiments can be found here: training and validation.

## Run Inference

Follow the inference_tutorial.ipynb tutorial locally, or open it in Colab, for step-by-step instructions on how to run inference on an image, a video, and an RGBD image.
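For orientation before working through the tutorial: all three modalities are fed to the model as 5D tensors of shape (batch, channel, time, height, width). Images and RGBD frames are treated as single-frame clips (T = 1), with RGBD adding a fourth (disparity) channel. The sketch below shows only this shape handling; the 224×224 resolution and 32-frame clip length are illustrative values, and the `input_type` call in the final comment follows the tutorial's usage:

```python
import torch

# Image: (C, H, W) -> (B, C, T, H, W) with a singleton time dimension.
image = torch.randn(3, 224, 224)
image_input = image.unsqueeze(1).unsqueeze(0)    # (1, 3, 1, 224, 224)

# Video: (C, T, H, W) -> add a batch dimension.
video = torch.randn(3, 32, 224, 224)
video_input = video.unsqueeze(0)                 # (1, 3, 32, 224, 224)

# RGBD: RGB plus a disparity channel, again as a single-frame clip.
rgbd = torch.randn(4, 224, 224)
rgbd_input = rgbd.unsqueeze(1).unsqueeze(0)      # (1, 4, 1, 224, 224)

# The modality is then selected per forward pass, e.g.
#   with torch.no_grad():
#       prediction = model(image_input, input_type="image")
```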

To run the tutorial you need to install Jupyter Notebook. To install it with conda, you may run the following:

```shell
conda install jupyter nb_conda ipykernel ipywidgets
```

Omnivore is integrated into Hugging Face Spaces 🤗 using Gradio; try out the web demo there. A Replicate web demo and Docker image are also available, where you can try loading the different checkpoints.

## Citation

If this work is helpful in your research, please consider starring ⭐ us and citing:

```bibtex
@inproceedings{girdhar2022omnivore,
  title={{Omnivore: A Single Model for Many Visual Modalities}},
  author={Girdhar, Rohit and Singh, Mannat and Ravi, Nikhila and van der Maaten, Laurens and Joulin, Armand and Misra, Ishan},
  booktitle={CVPR},
  year={2022}
}
```

## Contributing

We welcome your pull requests! Please see CONTRIBUTING and CODE_OF_CONDUCT for more information.

## License

Omnivore is released under the CC-BY-NC 4.0 license. See LICENSE for additional details. However, the Swin Transformer implementation is additionally licensed under the Apache 2.0 license (see NOTICE for additional details).