Making Data, the DataMade Way
This guide is part of a body of technical and process documentation maintained by DataMade. Head over to
datamade/how-to for other guides on topics ranging from AWS to work practices!
What is ETL?
ETL refers to the general process of:
- taking raw source data ("Extract")
- doing some stuff to get the data in shape, possibly involving intermediate derived files ("Transform")
- producing final output in a more usable form (for "Loading" into something that consumes the data - be it an app, a system, a visualization, etc.)
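The three stages can be sketched as Makefile rules, where each target is a file and each prerequisite is the stage before it. This is only an illustrative sketch: the file names, URL, and `scripts/to_json.py` helper are hypothetical, and `csvcut` (from csvkit) stands in for whatever transform tool a project actually uses.

```make
# Extract: download the raw source data (treated as immutable)
data/raw/source.csv:
	mkdir -p data/raw
	curl -o $@ "https://example.com/source.csv"  # hypothetical source URL

# Transform: derive a cleaned intermediate file from the raw data
data/intermediate/cleaned.csv: data/raw/source.csv
	mkdir -p data/intermediate
	csvcut -c id,name,value $< > $@

# Load: produce the final output consumed by an app or visualization
data/final/output.json: data/intermediate/cleaned.csv
	mkdir -p data/final
	python scripts/to_json.py $< > $@  # hypothetical conversion script
```

Because each rule declares its inputs, `make` rebuilds only the files whose dependencies have changed, and the intermediate files "show your work" between raw source and final output.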
Having a standard ETL workflow helps us make sure that our work is clean, consistent, and easy to reproduce. By following these guidelines you'll be able to keep your work up to date and share it with the world in a standard format - all with as few headaches as possible.
These five principles inform all of our data work:
- Never destroy data - treat source data as immutable, and show your work when you modify it
- Be able to deterministically produce the final data with one command
- Write as little custom code as possible
- Use standard tools whenever possible
- Keep source data under version control
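In Makefile terms, the "one command" and "never destroy data" principles might look like the following sketch (target and directory names are hypothetical):

```make
# A single phony target deterministically rebuilds all final outputs
.PHONY: all clean
all: data/final/output.json

# clean removes only derived files; raw source data is never deleted
clean:
	rm -rf data/intermediate data/final
```

Running `make all` from a fresh checkout should reproduce the final data end to end, while `make clean` resets only what the pipeline itself generated.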
Unsure how to follow these principles? Read on!
- Make & Makefile Overview
- ETL Styleguide
- Some Annotated ETL Code Examples with Make
- Recipes for Common Makefile Operations
- Chicago Lead - data work with a clear README and Makefile
- EITC Works - adding data attributes to Illinois House and Senate district shapefiles and outputting them as GeoJSON