Catalyst logo

Reproducible and fast DL & RL

High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation, and code/idea reuse, so you can research and develop something new rather than write yet another regular train loop.

Break the cycle – use Catalyst!


Installation

Common installation:

pip install -U catalyst

For a more specific installation, with additional requirements:

pip install catalyst[rl]       # installs Catalyst with the RL extras
pip install catalyst[contrib]  # installs Catalyst with the contrib extras
pip install catalyst[all]      # installs everything; convenient for deploying on a new server

Catalyst is compatible with Python 3.6+ and PyTorch 1.0.0+.
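
To verify the installation, you can print the package version; a minimal check, assuming a standard pip install:

import catalyst
print(catalyst.__version__)  # prints the installed release, e.g. in the YY.MM format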

Docs and examples

API documentation and an overview of the library can be found in the Docs.

In the examples folder of the repository, you can find advanced tutorials and Catalyst best practices.

Blog

To learn more about Catalyst internals and stay up to date with the most important features, you can read Catalyst-info, our blog where we regularly write about the framework.

Awesome list of Catalyst-powered repositories

We maintain the Awesome Catalyst list. You can open a PR to add your project to it.

Releases

We publish a major release named YY.MM once a month, and micro-releases with hotfixes and framework improvements in the YY.MM.# format.

You can view the changelog on the GitHub Releases page.

The current version is always available on the PyPI page.

Overview

Catalyst helps you write compact but full-featured DL & RL pipelines in a few lines of code. You get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate.

Features

  • Universal train/inference loop.
  • Configuration files for model/data hyperparameters.
  • Reproducibility – all source code and environment variables will be saved.
  • Callbacks – reusable train/inference pipeline parts (see the sketch after this list).
  • Training stages support.
  • Easy customization.
  • PyTorch best practices (SWA, AdamW, Ranger optimizer, OneCycleLRWithWarmup, FP16 and more).
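
As an illustration of the callback mechanism, here is a minimal sketch of a custom callback. It assumes the Callback base class, the CallbackOrder enum, and the on_batch_end(state) hook from catalyst.dl as of this release; hook and order names have changed between versions:

from catalyst.dl import Callback, CallbackOrder

class BatchCounterCallback(Callback):
    """Toy reusable pipeline part: counts processed batches."""

    def __init__(self):
        super().__init__(order=CallbackOrder.Metric)  # run alongside metric callbacks
        self.counter = 0

    def on_batch_end(self, state):
        # ``state`` carries the runner's current loader, model, metrics, etc.
        self.counter += 1

Such a callback is plugged into the pipeline via runner.train(..., callbacks=[BatchCounterCallback()]).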

Structure

  • DL – runner for training and inference; all of the classic machine learning and computer vision metrics and a variety of callbacks for training, validation, and inference of neural networks.
  • RL – scalable Reinforcement Learning: on-policy and off-policy algorithms and their improvements, with distributed training support.
  • contrib – additional modules contributed by Catalyst users.
  • data – useful tools and scripts for data processing.

Getting started: 30 seconds with Catalyst

import torch
from catalyst.dl import SupervisedRunner

# experiment setup
logdir = "./logdir"
num_epochs = 42

# data
loaders = {"train": ..., "valid": ...}

# model, criterion, optimizer
model = Net()  # any torch.nn.Module; see the sketch after this snippet
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)

# model runner
runner = SupervisedRunner()

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
)
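
The snippet above leaves Net and the loaders as placeholders. For a self-contained run, here is one possible filling; the MNIST dataset, the one-layer model, and the batch size are illustrative choices, assuming torchvision is installed:

import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class Net(nn.Module):
    """Illustrative model: one linear layer over flattened 28x28 images."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.fc(x.view(x.size(0), -1))

transform = transforms.ToTensor()
loaders = {
    "train": DataLoader(
        datasets.MNIST("./data", train=True, download=True, transform=transform),
        batch_size=32, shuffle=True,
    ),
    "valid": DataLoader(
        datasets.MNIST("./data", train=False, download=True, transform=transform),
        batch_size=32,
    ),
}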

For an introduction to Catalyst.RL, please follow the OpenAI Gym example.

Docker

Catalyst has its own DockerHub page:

  • catalystteam/catalyst:{CATALYST_VERSION} – simple image with Catalyst
  • catalystteam/catalyst:{CATALYST_VERSION}-fp16 – Catalyst with FP16
  • catalystteam/catalyst:{CATALYST_VERSION}-dev – Catalyst for development with all the requirements
  • catalystteam/catalyst:{CATALYST_VERSION}-dev-fp16 – Catalyst for development with FP16

To build an image from the sources, or to get more information and examples, please visit the docker folder.
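
For instance, a container can be started from one of these images as follows; the 19.11 tag and the mount point are only examples, substitute the release and paths you need:

# mount the current directory into the container; the target path is an arbitrary choice
docker run -it --rm -v $(pwd):/workspace catalystteam/catalyst:19.11-fp16 bash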

Contribution guide

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

License

This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.

Citation

Please use this BibTeX entry if you want to cite this repository in your publications:

@misc{catalyst,
    author = {Kolesnikov, Sergey},
    title = {Reproducible and fast DL \& RL.},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/catalyst-team/catalyst}},
}