An aggregate look at the code being written in the journalism world.
We collect data from GitHub in order to aggregate it, view it in different ways, and avoid hitting the API on every page load. The scraper is aimed at being run on ScraperWiki, due to its awesomeness and the fact that we already have an account.
pip install -r requirements.txt
export GITHUB_TOKEN=XXXXXX
python collecting/collecting.py
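As a rough sketch of what collecting does, here is how a scraper might authorize against the GitHub API with that token and roll repo data up into per-language totals. The function names and the aggregation shape are illustrative assumptions, not the project's actual code; the dict keys match GitHub's v3 repo JSON.

```python
import os

def auth_headers(token):
    # GitHub's v3 API accepts a personal access token in this header.
    return {"Authorization": "token %s" % token}

def aggregate_repos(repos):
    """Roll up a list of repo dicts (shaped like the GitHub API's
    repo objects) into per-language counts of repos, stars, and forks."""
    totals = {}
    for repo in repos:
        lang = repo.get("language") or "Unknown"
        bucket = totals.setdefault(lang, {"repos": 0, "stars": 0, "forks": 0})
        bucket["repos"] += 1
        bucket["stars"] += repo.get("stargazers_count", 0)
        bucket["forks"] += repo.get("forks_count", 0)
    return totals

# Stand-in data; a real run would fetch JSON from the API using
# auth_headers(os.environ["GITHUB_TOKEN"]).
sample = [
    {"language": "Python", "stargazers_count": 10, "forks_count": 2},
    {"language": "Python", "stargazers_count": 5, "forks_count": 1},
    {"language": None, "stargazers_count": 3, "forks_count": 0},
]
print(aggregate_repos(sample))
```

The aggregated output is what the front end would read, so the site itself never has to talk to GitHub.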
The site is just HTML, CSS, and JavaScript; there are no server-side bits. A build process should be put in place to make the front-end assets speedier.
It would be so cool to have an RSS feed of projects. This would require a server, maybe Heroku.
MinnData, the MinnPost data team, is Alan, Tom, and Kaeti, plus all the awesome contributors to the open source projects we utilize. See our work at minnpost.com/data.