
DataFlows


DataFlows is a simple and intuitive way of building data processing flows.

  • It's built for small-to-medium data processing - data that fits on your hard drive, but is too big to load in Excel or as-is into Python, and not big enough to require spinning up a Hadoop cluster...
  • It's built upon the foundation of the Frictionless Data project - which means that all data produced by these flows is easily reusable by others.
  • It's a pattern, not a heavyweight framework: if you already have a bunch of download and extract scripts, this will be a natural fit.

Read more in the [Features section below](#features).
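The "pattern, not framework" idea can be illustrated without the library itself: a flow is just a chain of steps, each consuming and yielding rows. Below is a minimal pure-Python sketch of that shape; the function names and the `run_flow` helper are made up for illustration and are not part of the dataflows API.

```python
# A flow as a chain of row-processing steps: each step is a function
# that takes an iterable of row dicts and yields (possibly modified) rows.

def load_rows():
    # In a real flow this step would download and parse a remote file.
    yield {'name': 'alice', 'score': '10'}
    yield {'name': 'bob', 'score': '7'}

def parse_score(rows):
    # A modification step: cast the 'score' field to int.
    for row in rows:
        row['score'] = int(row['score'])
        yield row

def keep_high_scores(rows):
    # A filtering step: drop rows below a threshold.
    for row in rows:
        if row['score'] >= 10:
            yield row

def run_flow(source, *steps):
    # Chain the steps together, much like piping shell commands.
    rows = source()
    for step in steps:
        rows = step(rows)
    return list(rows)

result = run_flow(load_rows, parse_score, keep_high_scores)
print(result)  # [{'name': 'alice', 'score': 10}]
```

If you already have download/extract scripts structured like this, adopting the library mostly means swapping your ad-hoc steps for its processors.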


Install dataflows via pip install.

Then use the command-line interface to bootstrap a basic processing script for any remote data file:

# Install from PyPi
$ pip install dataflows

# Inspect a remote CSV file
$ dataflows init
Writing processing code into
#     Year           Ceremony  Award                                 Winner  Name                            Film
      (string)      (integer)  (string)                            (string)  (string)                        (string)
----  ----------  -----------  --------------------------------  ----------  ------------------------------  -------------------
1     1927/1928             1  Actor                                         Richard Barthelmess             The Noose
2     1927/1928             1  Actor                                      1  Emil Jannings                   The Last Command
3     1927/1928             1  Actress                                       Louise Dresser                  A Ship Comes In
4     1927/1928             1  Actress                                    1  Janet Gaynor                    7th Heaven
5     1927/1928             1  Actress                                       Gloria Swanson                  Sadie Thompson
6     1927/1928             1  Art Direction                                 Rochus Gliese                   Sunrise
7     1927/1928             1  Art Direction                              1  William Cameron Menzies         The Dove; Tempest

# dataflows creates a local package of the data and a reusable processing script, which you can tinker with
$ tree
├── academy_csv
│   ├── academy.csv
│   └── datapackage.json

1 directory, 3 files

# Resulting 'Data Package' is super easy to use in Python
[adam] ~/code/budgetkey-apps/budgetkey-app-main-page/tmp (master=) $ python
Python 3.6.1 (default, Mar 27 2017, 00:25:54)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datapackage import Package
>>> pkg = Package('academy_csv/datapackage.json')
>>> it = pkg.resources[0].iter(keyed=True)
>>> next(it)
{'Year': '1927/1928', 'Ceremony': 1, 'Award': 'Actor', 'Winner': None, 'Name': 'Richard Barthelmess', 'Film': 'The Noose'}
>>> next(it)
{'Year': '1927/1928', 'Ceremony': 1, 'Award': 'Actor', 'Winner': '1', 'Name': 'Emil Jannings', 'Film': 'The Last Command'}

# You can now run the generated script to repeat the process
# and modify it to add data modification steps


Features

  • Trivial to get started and easy to scale up
  • Set up and run from command line in seconds ...
    • dataflows init =>
    • python
  • Validate input (and especially source) quickly (non-zero length, right structure, etc.)
  • Supports caching data from source and even between steps
    • so that we can run and test quickly (retrieving is slow)
  • Run an immediate test and look at the output ...
    • Log, debug, rerun
  • Degrades to simple python
  • Conventions over configuration
  • Log exceptions and/or terminate
  • The input to each stage is a Data Package or Data Resource (not a previous task)
    • Data package based and compatible
  • Processors can be a function (or a class) processing row-by-row, resource-by-resource or a full package
  • A decent pre-existing contrib library of Readers (Collectors), Processors and Writers
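The "processor at several granularities" point above can be sketched in plain Python. These are simplified stand-ins to show the shape of the choice, not the actual dataflows processor signatures:

```python
# The same transformation expressed at two granularities: a row-level
# function sees one row at a time; a resource-level function sees the
# whole iterable of rows and can therefore keep state across rows.

def add_total_row(row):
    # Row-level processor: computes a derived field for a single row.
    row['total'] = row['price'] * row['quantity']
    return row

def add_totals_resource(resource):
    # Resource-level processor: keeps a running sum across all rows.
    running = 0
    for row in resource:
        row['total'] = row['price'] * row['quantity']
        running += row['total']
        row['running_total'] = running
        yield row

rows = [{'price': 2, 'quantity': 3}, {'price': 5, 'quantity': 1}]
per_row = [add_total_row(dict(r)) for r in rows]
per_resource = list(add_totals_resource([dict(r) for r in rows]))
```

Row-level processors are the simplest to write and test; resource-level (or package-level) processors are for transformations that need context beyond a single row, such as running totals or deduplication.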

Learn more

Dive into the Tutorial to get a deeper glimpse into everything that dataflows can do. Also review the list of Built-in Processors, which includes an API reference for each one.
