FAIR Self-Supervision Benchmark

This code provides benchmark (and legacy) tasks for evaluating the quality of visual representations learned by various self-supervision approaches. It corresponds to our work on Scaling and Benchmarking Self-Supervised Visual Representation Learning. The code is written in Python and uses the Caffe2 frontend available in PyTorch 1.0. We hope that this benchmark release will provide a consistent evaluation strategy that makes progress in self-supervision easy to measure.

Introduction

The goal of fair_self_supervision_benchmark is to standardize the methodology for evaluating the quality of visual representations learned by various self-supervision approaches. Further, it provides evaluation on a variety of tasks:

Benchmark tasks: The benchmark tasks are based on the principle that a good representation (1) transfers to many different tasks, and (2) transfers with limited supervision and limited fine-tuning. The tasks are as follows:

- Image Classification
- Object Detection
- Surface Normal Estimation
- Visual Navigation

These benchmark tasks use the following network architectures:

Legacy tasks: We also classify some commonly used evaluation tasks as legacy tasks, for the reasons discussed in Section 7 of the paper. The tasks are as follows:
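The transfer principle above is commonly operationalized as a "linear probe": features from the pretrained network are frozen, and only a linear classifier is trained on top with limited labeled data. The sketch below illustrates that idea in plain NumPy on toy data; the function names and synthetic features are illustrative assumptions, not part of this repository's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_probe(features, labels, n_classes, lr=0.1, epochs=200):
    """Fit a softmax linear classifier on frozen features by gradient descent."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # softmax cross-entropy gradient
        W -= lr * (features.T @ grad)
        b -= lr * grad.sum(axis=0)
    return W, b

def accuracy(features, labels, W, b):
    return float(((features @ W + b).argmax(axis=1) == labels).mean())

# Toy stand-in for frozen features: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
W, b = train_linear_probe(X, y, n_classes=2)
print(accuracy(X, y, W, b))
```

A representation whose frozen features yield high accuracy under such a probe, across many tasks, is "good" in the sense the benchmark measures.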

License

fair_self_supervision_benchmark is CC-NC 4.0 International licensed, as found in the LICENSE file.

Citation

If you use fair_self_supervision_benchmark in your research or wish to refer to the baseline results published in the paper, please use the following BibTeX entry.

@article{goyal2019scaling,
  title={Scaling and Benchmarking Self-Supervised Visual Representation Learning},
  author={Goyal, Priya and Mahajan, Dhruv and Gupta, Abhinav and Misra, Ishan},
  journal={arXiv preprint arXiv:1905.01235},
  year={2019}
}

Installation

Please find installation instructions in INSTALL.md.

Getting Started

After installation, please see GETTING_STARTED.md for how to run various benchmark tasks.

Model Zoo

We provide the models used in our paper in MODEL_ZOO.md.
