An end-to-end PyTorch framework for image and video classification

Classy Vision is a new end-to-end, PyTorch-based framework for large-scale training of state-of-the-art image and video classification models. Previous computer vision (CV) libraries have focused on providing components for users to build their own frameworks for their research. While this approach offers flexibility for researchers, in production settings it leads to duplicative effort, and it requires users to migrate research between frameworks and to relearn the minutiae of efficient distributed training and data loading. Our PyTorch-based CV framework offers a better solution for training at scale and for deploying to production. It offers several notable advantages:

  • Ease of use. The library features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with Amazon Web Services (AWS), facilitating research at scale and making it simple to move between research and production.
  • High performance. Researchers can use the framework to train Resnet50 on ImageNet in as little as 15 minutes, for example.
  • Demonstrated success in training at scale. We’ve used it to replicate the state-of-the-art results from the paper Exploring the Limits of Weakly Supervised Pretraining.
  • Integration with PyTorch Hub. AI researchers and engineers can download and fine-tune the best publicly available ImageNet models with just a few lines of code.
  • Elastic training. We have also added experimental integration with PyTorch Elastic, which allows distributed training jobs to adjust as the available resources in the cluster change. It also makes distributed training robust to transient hardware failures.

Classy Vision is beta software. The project is under active development and our APIs are subject to change in future releases.


Installation Requirements

Make sure you have an up-to-date installation of PyTorch (1.4), Python (3.6) and torchvision (0.5). If you want to use GPUs, then a CUDA installation (10.1) is also required.
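As a quick sanity check, the installed versions can be compared against these minimums. The helper below is a hypothetical sketch (pure Python, not part of Classy Vision); in practice the installed versions come from torch.__version__ and torchvision.__version__:

```python
def meets_minimum(installed, minimum):
    """Compare dotted version strings numerically, so '1.10' >= '1.4'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

# Example: check version strings against the minimums listed above.
print(meets_minimum("1.4.0", "1.4"))  # True
print(meets_minimum("1.3.1", "1.4"))  # False
```

Note that local builds may append suffixes (e.g. "1.4.0a0") that this simple parser does not handle.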

Installing the latest stable release

To install Classy Vision via pip:

pip install classy_vision

To install Classy Vision via conda (only works on Linux):

conda install -c conda-forge classy_vision

Manual install of latest commit on master

Alternatively you can do a manual install.

git clone https://github.com/facebookresearch/ClassyVision.git
cd ClassyVision
pip install .

Getting started

Classy Vision aims to support a variety of projects built and open-sourced on top of the core library. We provide utilities for setting up a project in a standard format, with simple generated examples to get started. To start a new project:

classy-project my-project
cd my-project

We even include a simple, synthetic training example to show how to use Classy Vision:

 ./classy_train.py --config configs/template_config.json

Voila! A few seconds later your first training run using our classification task should be done. Check out the results in the output folder:

ls output_<timestamp>/checkpoints/
checkpoint.torch model_phase-0_end.torch model_phase-1_end.torch model_phase-2_end.torch model_phase-3_end.torch

checkpoint.torch is the latest model (in this case, the same as model_phase-3_end.torch); a checkpoint is saved at the end of each phase.
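The phase-numbered filenames can be parsed to pick out the most recent phase checkpoint. A minimal pure-Python sketch (the file list is hard-coded here rather than read from disk):

```python
import re

# Checkpoint files as produced by the run above.
files = [
    "checkpoint.torch",
    "model_phase-0_end.torch",
    "model_phase-1_end.torch",
    "model_phase-2_end.torch",
    "model_phase-3_end.torch",
]

def latest_phase_checkpoint(names):
    """Return the phase checkpoint with the highest phase number, or None."""
    pattern = re.compile(r"model_phase-(\d+)_end\.torch")
    phases = []
    for name in names:
        match = pattern.fullmatch(name)
        if match:
            phases.append((int(match.group(1)), name))
    return max(phases)[1] if phases else None

print(latest_phase_checkpoint(files))  # model_phase-3_end.torch
```

The checkpoint files themselves are regular torch.save artifacts, so they can typically be loaded with torch.load once PyTorch is installed.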

For more details / tutorials see the documentation section below.


Documentation

Please see our tutorials to learn how to get started on Classy Vision and customize your training runs. Full documentation is available here.

Join the Classy Vision community

See the CONTRIBUTING file for how to help out.


License

Classy Vision is MIT licensed, as found in the LICENSE file.

Citing Classy Vision

If you use Classy Vision in your work, please use the following BibTeX entry:

@misc{classyvision,
  title={Classy Vision},
  author={{Adcock}, A. and {Reis}, V. and {Singh}, M. and {Yan}, Z. and {van der Maaten}, L. and {Zhang}, K. and {Motwani}, S. and {Guerin}, J. and {Goyal}, N. and {Misra}, I. and {Gustafson}, L. and {Changhan}, C. and {Goyal}, P.},
  howpublished = {\url{}},
}