
Data Engineering 101: Building a Data Pipeline

This repository contains the files and data from the workshop, as well as resources on Data Engineering. During the workshop (and after), we will use a Gitter chatroom to keep the conversation going.

Please do not hesitate to reach out to me directly via email or over Twitter (@clearspandex).

The presentation can be found on Slideshare here or in this repository (presentation.pdf). Video can be found here.

Throughout this workshop, you will learn how to build a scalable and sustainable data pipeline in Python with Luigi.

Learning Objectives

  • Run a simple 1 stage Luigi flow reading/writing to local files
  • Write a Luigi flow containing stages with multiple dependencies
    • Visualize the progress of the flow using the centralized scheduler
    • Parameterize the flow from the command line
    • Output parameter specific output files
  • Manage serialization to/from a Postgres database
  • Integrate a Hadoop Map/Reduce task into an existing flow
  • Parallelize non-dependent stages of a multi-stage Luigi flow
  • Schedule a local Luigi job to run once every day
  • Run any arbitrary shell command in a repeatable way


Prior experience with Python and the scientific Python stack is beneficial. The workshop will focus on the Luigi framework, but will also include code from the following libraries:

  • numpy
  • scikit-learn
  • Flask

Run the Code


  1. Install libraries and dependencies: pip install -r requirements.txt
  2. Start the UI server: luigid --background --logdir logs
  3. Navigate with a web browser to http://localhost:[port] where [port] is the port the luigid server has started on (luigid defaults to port 8082)
  4. Start the API server: python
  5. Evaluate Model: python EvaluateModel --input-dir text --lam 0.8
  6. Run evaluation server (at localhost:9191): topmodel/
  7. Run the final pipeline: python BuildModels --input-dir text --num-topics 10 --lam 0.8


For parallelism, set --workers (note this is Task parallelism):

python BuildModels --input-dir text --num-topics 10 --lam 0.8 --workers 4


  1. Start Hadoop cluster: bin/; sbin/
  2. Setup Directory Structure: hadoop fs -mkdir /tmp/text
  3. Get files on cluster: hadoop fs -put ./data/text /tmp/text
  4. Retrieve results: hadoop fs -getmerge /tmp/text-count/2012-06-01 ./counts.txt
  5. View results: head ./counts.txt
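The Hadoop job above produces per-word counts for the text files. The underlying map/reduce logic can be sketched in plain Python (function names are illustrative; on the cluster, the reduce phase would run per-key across many reducers):

```python
from collections import defaultdict


def mapper(line):
    """Map phase: emit a (word, 1) pair for each whitespace-separated token."""
    for word in line.split():
        yield word.lower(), 1


def reducer(pairs):
    """Reduce phase: sum the counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)


lines = ["the quick brown fox", "the lazy dog"]
pairs = [kv for line in lines for kv in mapper(line)]
print(reducer(pairs))  # {'the': 2, 'quick': 1, ...}
```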


  1. docker run -it -v /LOCAL/PATH/TO/REPO/data-engineering-101:/root/workshop clearspandex/pydata-seattle bash
  2. pip2 install flask
  3. ipython2

Libraries Used

What's in here?

text/                   20newsgroups text files
topmodel/               Stripe's topmodel evaluation library
                        example scaffold of a luigi pipeline
                        example luigi pipeline using Hadoop
                        luigi pipeline covered in workshop
                        Flask server to deploy a scikit-learn model
LICENSE                 Details of rights of use and distribution
presentation.pdf        lecture slides from presentation
                        this file!

The Data

The data (in the text/ folder) is from the 20 newsgroups dataset, a standard benchmarking dataset for machine learning and NLP. Each file in text/ corresponds to a single 'document' (or post) from one of two selected newsgroups ( or alt.atheism). The first line states which group the document is from, and everything thereafter is the body of the post. For example:
I'm looking for a better method to back up files.  Currently using a MaynStream
250Q that uses DC 6250 tapes.  I will need to have a capacity of 600 Mb to 1Gb
for future backups.  Only DOS files.

I would be VERY appreciative of information about backup devices or
manufacturers of these products.  Flopticals, DAT, tape, anything.  
If possible, please include price, backup speed, manufacturer (phone #?), 
and opinions about the quality/reliability.

Please E-Mail, I'll send summaries to those interested.

Thanx in advance,


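Given the format described above (newsgroup label on the first line, body afterwards), splitting a document into its label and body is a one-liner. A minimal sketch; the function name is illustrative:

```python
def parse_document(text):
    """Split a 20newsgroups file into (label, body): the first line is the group."""
    label, _, body = text.partition("\n")
    return label.strip(), body


doc = "alt.atheism\nFirst line of the post body.\nSecond line."
label, body = parse_document(doc)
print(label)  # alt.atheism
```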

Copyright 2015 Jonathan Dinu.

All files and content are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.