Federal Treasury API

[ASCII-art logo]

About this branch

The master branch of federal-treasury-api contains code specific to running our project on ScraperWiki. The just-the-api branch contains only the code needed to download the data locally and launch a queryable API. In other words, if you're just looking to get and host the database, use the just-the-api branch.

About the API

federal-treasury-api is the first-ever electronically searchable database of the Federal government's daily cash spending and borrowing. It updates daily, and the data can be exported in various formats and loaded into various systems.

About the data

There are eight tables.

  • I. Operating Cash Balance (t1)
  • II. Deposits and Withdrawals (t2)
  • IIIa. Public Debt Transactions (t3a)
  • IIIb. Adjustment of Public Debt Transactions to Cash Basis (t3b)
  • IIIc. Debt Subject to Limit (t3c)
  • IV. Federal Tax Deposits (t4)
  • V. Short-term Cash Investments (t5)
  • VI. Income Tax Refunds Issued (t6)

Check out the comprehensive data dictionary and treasury.io for more information.
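
If you already have a copy of the database (see "Obtaining the data" below), you can explore these tables with Python's built-in sqlite3 module. A minimal sketch, assuming the default data/treasury_data.db output path:

import sqlite3

conn = sqlite3.connect("data/treasury_data.db")

# List the tables; you should see t1, t2, t3a, t3b, t3c, t4, t5, and t6.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print([name for (name,) in tables])

# Peek at the first few rows of the Operating Cash Balance table.
for row in conn.execute("SELECT * FROM t1 LIMIT 5"):
    print(row)

conn.close()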

Obtaining the data

Optionally, set up a virtualenv and activate it (you'll need one on ScraperWiki). Run this from the root of the repository.

virtualenv env
source env/bin/activate

Install dependencies.

pip install -r requirements.pip

Enable the git post-merge hook, which git runs automatically after every git pull that results in a merge.

cd .git/hooks
ln -s ../../utils/post-merge .

POSIX

This one command downloads any new fixies and converts them to an SQLite3 database.

./run.sh

Windows

Run everything.

cd parser
python download_and_parse_fms_fixies.py
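
Either way, you can sanity-check the build by counting rows in each table. A quick sketch, run from the repository root and assuming the default data/treasury_data.db output path:

import sqlite3

conn = sqlite3.connect("data/treasury_data.db")
# Table names can't be bound as SQL parameters, so they are interpolated
# directly; the list is fixed, so this is safe.
for table in ("t1", "t2", "t3a", "t3b", "t3c", "t4", "t5", "t6"):
    (count,) = conn.execute("SELECT COUNT(*) FROM {}".format(table)).fetchone()
    print(table, count)
conn.close()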

Testing the data

Various tests are contained in tests/.

Tests are run every day by ./run.sh, and the results are emailed to csvsoundsystem@gmail.com.
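
As a sketch of the kind of check involved (this is illustrative, not one of the repository's actual tests), a unittest case can assert that all eight tables made it into the database:

import sqlite3
import unittest

class TestDatabase(unittest.TestCase):
    def test_all_eight_tables_exist(self):
        conn = sqlite3.connect("data/treasury_data.db")
        names = {name for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        )}
        conn.close()
        for table in ("t1", "t2", "t3a", "t3b", "t3c", "t4", "t5", "t6"):
            self.assertIn(table, names)

if __name__ == "__main__":
    unittest.main()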

Cron

Run everything each day around 4:30 p.m., right after the data has been released.

30 16 * * * cd path/to/federal-treasury-api && ./run.sh

Optional: set up logging

30 16 * * * cd path/to/federal-treasury-api && ./run.sh >> run.log 2>> err.log

Deploying to ScraperWiki

You can run this on any number of servers, but we happen to be using ScraperWiki. You can check out their documentation here.

SSH

To use ScraperWiki, log in here, make a project, click the "SSH in" link, add your SSH key and SSH in. Then you can SSH to the box like so.

ssh cc7znvq@premium.scraperwiki.com

Or add this to your ~/.ssh/config

Host fms
HostName premium.scraperwiki.com
User cc7znvq

and just run

ssh fms

What this ScraperWiki account is

Some notes about how ScraperWiki works:

  • We have a user account in a chroot jail.
  • We don't have root, so we install Python packages in a virtualenv.
  • Files in /home/http get served on the web.
  • The database /home/scraperwiki.sqlite gets served from the SQLite web API (see the example query below).
    • NOTE: /home/scraperwiki.sqlite is simply a symbolic link to data/treasury_data.db, created with this command: ln -s data/treasury_data.db scraperwiki.sqlite

The directions above still apply for any other service, of course.
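
As an example of the web API in action, you can query the hosted database over HTTP. A sketch using only the Python standard library; the URL is a hypothetical placeholder, since the real endpoint path and publish token come from your own ScraperWiki box:

import json
import urllib.parse
import urllib.request

# Hypothetical placeholder URL: substitute the SQL endpoint and publish
# token shown in your own ScraperWiki dashboard.
ENDPOINT = "https://premium.scraperwiki.com/cc7znvq/YOUR-PUBLISH-TOKEN/sql/"

query = "SELECT * FROM t1 ORDER BY rowid DESC LIMIT 5"
url = ENDPOINT + "?q=" + urllib.parse.quote(query)

with urllib.request.urlopen(url) as response:
    rows = json.load(response)
print(rows)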
