Created as a Multimedia Scholarship honors thesis for the USC School of Cinematic Arts. Lives at http://transitlens.org
## Data scraping and prep
These are the (messy) Python scripts I used to gather the data from Trimet and organize it for use on transitlens.org.
I had files saved in multiple locations, since I was using a mixture of Dropbox, local files, and some stuff on the IML server. I haven't done anything to fix the file paths, so if you were to use this for anything you'd have to change all of those.
## Using these things to get data
You'll need a Trimet API key, from http://developer.trimet.org/. It goes on line 26 of
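For reference, TriMet's web services take the key as an `appID` query parameter. A minimal sketch of building a request URL (based on TriMet's public developer docs; the endpoint, stop IDs, and function name here are illustrative, not taken from these scripts):

```python
from urllib.parse import urlencode

# Illustrative only: TriMet's V1 arrivals endpoint takes the API key as appID.
TRIMET_API_KEY = "YOUR_KEY_HERE"  # get one from http://developer.trimet.org/

def arrivals_url(loc_ids, api_key=TRIMET_API_KEY):
    """Build a TriMet arrivals request URL for the given stop IDs."""
    params = {
        "locIDs": ",".join(str(i) for i in loc_ids),
        "appID": api_key,
        "json": "true",
    }
    return "https://developer.trimet.org/ws/V1/arrivals?" + urlencode(params)
```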
- Set up all the file paths and time/date settings across all the scripts. Some of them cleverly implement constants at the top; some of them lazily hard-code times in-line. Things will break if they don't match.
- Run `mainAPIscraper` with the `MAIN LOGIC FOR MOST OF THE TIME` half active and `TRY TO FILL IN MISSING` commented out. Just let 'er roll for as long as it takes.
- There will inevitably be some failures, so run `missingChecker` to generate a list of those.
- Swap which halves of `mainAPIscraper` are commented, and run it again.
- Repeat 3 & 4 as necessary.
- Run CSV Compiler over it.
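The run / check / re-run loop in steps 3–5 can be sketched like this (function and variable names are hypothetical; the real scripts toggle commented-out halves of `mainAPIscraper` rather than taking a callback):

```python
# Sketch of the scrape -> find gaps -> rescrape loop (names are made up).

def scrape(stop_ids, fetch):
    """Try to fetch each stop; return results plus a list of failures."""
    results, missing = {}, []
    for stop_id in stop_ids:
        try:
            results[stop_id] = fetch(stop_id)
        except Exception:
            missing.append(stop_id)  # missingChecker's job in the real pipeline
    return results, missing

def scrape_with_retries(stop_ids, fetch, max_rounds=3):
    """Repeat the scrape over whatever is still missing, up to max_rounds."""
    all_results = {}
    todo = list(stop_ids)
    for _ in range(max_rounds):
        if not todo:
            break
        results, todo = scrape(todo, fetch)
        all_results.update(results)
    return all_results, todo  # todo holds anything still missing at the end
```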
## Then you need other tools
So at this point I used geojson.io and QGIS to burn that CSV into usable geodata. Here's how I did it, in my own words (my at-the-time note to myself so I wouldn't forget):
- csv durations-3 to geojson with geojson.io
- export shapefile
- qgis spatial join to trimmed polygons
- convert to geojson with mygeodata.eu
- add var and save a .js
- use `jsonTitleChanger` to clean & rename with `-i`
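The last two steps, adding a `var` and cleaning/renaming with `jsonTitleChanger`, amount to rewriting each feature's property keys and wrapping the GeoJSON so a page can load it as a script. A minimal sketch of that idea (the variable name and property keys are made up, not what `jsonTitleChanger` actually uses):

```python
import json

def rename_properties(geojson, renames):
    """Rename property keys on every feature, dropping keys not in renames."""
    for feature in geojson["features"]:
        props = feature["properties"]
        feature["properties"] = {
            new: props[old] for old, new in renames.items() if old in props
        }
    return geojson

def to_js(geojson, var_name="transitData"):
    """Wrap the GeoJSON as a .js assignment so a page can <script>-load it."""
    return "var %s = %s;" % (var_name, json.dumps(geojson))

# Hypothetical example: keep the duration value, drop QGIS leftovers.
fc = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "geometry": None,
         "properties": {"durations-3": 42, "OBJECTID": 7}},
    ],
}
cleaned = rename_properties(fc, {"durations-3": "duration"})
js = to_js(cleaned)
```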