
drain


Drain is a lightweight framework for writing reproducible data science workflows in Python. The core features are:

  • Turn a Python workflow (DAG) into steps that can be run by a tool like make (see the sketch after this list).
  • Transparently pass the results of one step as the input to another, handling any caching the user requests with efficient tools like HDF and joblib.
  • Enable easy parallel execution of workflows.
  • Execute only those steps that are determined to be necessary based on timestamps (both source code and data) and dependencies, virtually guaranteeing reproducibility of results and efficient development.

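To make the step abstraction concrete, here is a minimal, self-contained sketch of the idea in plain Python. The Step class, its run and execute methods, and the toy steps below are illustrative assumptions for this sketch, not drain's actual API:

    # Hypothetical sketch of the step/DAG idea described above; drain's real
    # classes and method names may differ.
    class Step:
        def __init__(self, inputs=()):
            self.inputs = list(inputs)   # upstream steps this step depends on
            self.result = None           # memoized result after execution

        def run(self, *input_results):
            raise NotImplementedError

        def execute(self):
            # Run upstream steps first, then this step, memoizing each result
            # so shared dependencies are only computed once.
            if self.result is None:
                upstream = [step.execute() for step in self.inputs]
                self.result = self.run(*upstream)
            return self.result

    class LoadNumbers(Step):
        def run(self):
            return [1, 2, 3, 4]

    class Square(Step):
        def run(self, numbers):
            return [n * n for n in numbers]

    # Wire the DAG: the output of LoadNumbers becomes the input of Square.
    load = LoadNumbers()
    square = Square(inputs=[load])
    print(square.execute())  # [1, 4, 9, 16]

In drain itself, step results would be persisted to disk rather than held in memory, and independent steps can be dispatched in parallel.
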
Drain is designed around these principles:

  • Simplicity: drain is very lightweight and easy to use. The core is just a few hundred lines of code. The steps you write in drain get executed with minimal overhead, making drain workflows easy to debug and manage.
  • Reusability: Drain leverages mature tools like drake to execute workflows and libraries like joblib for caching (see the example after this list). It also provides a library of steps for data science workflows, including feature generation and selection, and model fitting and comparison.
  • Generality: Virtually any workflow can be realized in drain. The core was written with extensibility in mind so new storage backends and job schedulers, for example, will be easy to incorporate.
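
For reference, the disk caching that drain delegates to tools like joblib looks like the following when joblib is used on its own; the cache directory name is arbitrary, and the way drain wraps this inside steps is not shown here:

    from joblib import Memory

    # Cache results under ./drain_cache (an arbitrary directory for this
    # example); a repeated call with the same arguments is loaded from disk
    # instead of being recomputed.
    memory = Memory("drain_cache", verbose=0)

    @memory.cache
    def expensive_transform(values):
        return [v ** 2 for v in values]

    print(expensive_transform([1, 2, 3]))  # computed and written to the cache
    print(expensive_transform([1, 2, 3]))  # served from the on-disk cache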