# twlyparser

parse TW congress logs

WARNING: this is work in progress and the file format is likely to change!

## Prepare environment

Files with the `.ls` extension are LiveScript source files. LiveScript is a language that compiles to JavaScript.
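To give a feel for the relationship, a one-line LiveScript definition such as `add = (a, b) -> a + b` compiles to JavaScript roughly like this (approximate compiler output, shown for illustration):

```javascript
// LiveScript source:          add = (a, b) -> a + b
// Compiled JavaScript output (approximately):
var add;
add = function(a, b){
  return a + b;
};

console.log(add(1, 2)); // 3
```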

For Emacs users, a LiveScript mode is available for syntax highlighting.

To install Node.js, npm, and LiveScript on Ubuntu:

The Node.js packaged in Ubuntu is quite old and does not work with LiveScript. Please use the one from the chris-lea PPA:

$ sudo add-apt-repository ppa:chris-lea/node.js
$ sudo apt-get update
$ sudo apt-get install nodejs npm

and some dependencies:

$ sudo aptitude install libcups libimage-size-perl

Install the required Node.js packages:

$ npm i

## compile 
$ npm run prepublish

## Parsing legislator information to JSON

# update submodule
$ git submodule init
$ git submodule update
data/twly$ git pull origin master

# generate the JSON file. You can specify which AD you want; the example below uses ad=8
$ ./node_modules/.bin/lsc ad 8 > data/mly-8.json

# At the beginning of ad=9 the source did not provide legislator uids, so we maintain them ourselves for temporary usage.
$ ./node_modules/.bin/lsc > data/mly-9.json

## Parsing from prepared text versions of gazettes

# put ly-gazette in the same directory as twlyparser
$ cd ..
$ git clone git://

# output/raw/4004.text -> output/raw/
$ cd twlyparser
$ ./node_modules/.bin/lsc ./ --fromtext --gazette 4004 --dir ./output/raw

# generate all gazettes for 8th AD
$ ./node_modules/.bin/lsc ./ --fromtext --ad 8 --dir ./output/raw

## Parsing from official source

To retrieve the source Word files of a specific gazette that is already listed in `data/index.json`:

$ ./node_modules/.bin/lsc --gazette 4004

Convert to HTML with `unoconv`:

You'll need to install LibreOffice.

# make sure you do `git submodule init` and `git submodule update`
$ ./node_modules/.bin/lsc --force --gazette 4004

To parse:

You may use the sample data to skip the get-source and unoconv conversion steps.

twlyrawdata.tgz : download from

$ mkdir source/
$ tar xzvf twlyrawdata.tgz -C source/ 
$ mkdir output

# convert doc files to html and update data/gazettes.json with metadata
$ ./node_modules/.bin/lsc --dometa

# generate text file from source/
$ ./node_modules/.bin/lsc ./ --text --gazette 4004 --dir ./output

# generate markdown file from text generated above
$ ./node_modules/.bin/lsc ./ --fromtext --gazette 4004 --dir ./output

# generate all gazettes for 8th AD
$ ./node_modules/.bin/lsc ./ --text --ad 8 --dir ./output
$ ./node_modules/.bin/lsc ./ --fromtext --ad 8 --dir ./output

To generate JSON files from the Markdown:

# generate specific gazette or AD
$ ./node_modules/.bin/lsc ./ --gazette 4004 --dir ./output
$ ./node_modules/.bin/lsc ./ --ad 8 --dir ./output

# generate all gazettes
$ ./node_modules/.bin/lsc ./ --dir ../data

To generate JSON files of gazettes (only interpellation is supported for now):

./node_modules/.bin/lsc --dir ../data

Generate CK CSV files from JSON:

./node_modules/.bin/lsc > mly.csv                # ./data/mly-8.json
./node_modules/.bin/lsc > gazettes.csv           # ./data/gazettes.json
./node_modules/.bin/lsc --dir ../ly-gazette/raw  # 3110.json 3111.json ...
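The JSON-to-CSV step these commands perform can be sketched as follows. This is a minimal illustration with hypothetical field names (`name`, `party`), not the actual column set the scripts emit:

```javascript
// Minimal JSON-array -> CSV sketch (hypothetical fields, not the real schema).
function toCSV(rows, fields) {
  // Quote a cell if it contains a comma, quote, or newline.
  var quote = function (v) {
    var s = String(v == null ? '' : v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  var header = fields.join(',');
  var body = rows.map(function (r) {
    return fields.map(function (f) { return quote(r[f]); }).join(',');
  });
  return [header].concat(body).join('\n');
}

// Example usage with made-up legislator records:
var csv = toCSV(
  [{ name: 'A', party: 'X' }, { name: 'B', party: 'Y,Z' }],
  ['name', 'party']
);
console.log(csv);
```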

To bootstrap or maintain the index file cache in data/:

mkdir -p source/meta
sh ./list 4004 > source/meta/4004.html
./node_modules/.bin/lsc ./ source/meta/*.html
./node_modules/.bin/lsc ./

data/index.json should now be populated.

## Parse flow

There are some pages from which we can query data, such as:

  2. - but you cannot use this link directly, for technical reasons
    1. go to
    2. choose '立法院議事系統' (the Legislative Yuan parliamentary proceedings system)


There is a script to generate bill-diffs, but we need the billId of a bill to bootstrap the script. The billId can be found on the misq page. Use 1010509070300300 as an example billId:

./node_modules/.bin/lsc 1010509070300300

helps us generate the bill-diff. It might fail on the first run; just execute it twice.


We can parse motion data as well. First of all, install the Chrome extension from g0v/ly-crx, then:

1. open (in the correct way)
2. query motions
3. you will see a 'download all' button on the query-result page; click it
4. the browser will open a new page; save the whole content of that page to /tmp/foo.html

./node_modules/.bin/lsc /tmp/foo.html > foo.json
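The command above converts the saved page into JSON. A rough sketch of the kind of table extraction such a step involves follows; the real motion-page markup is not shown in this README, so the HTML below is hypothetical:

```javascript
// Hypothetical sketch: pull the <td> cells out of each <tr> of a saved HTML
// table. The real motion page layout differs; this only illustrates the idea.
function parseRows(html) {
  var rows = [];
  var trRe = /<tr[^>]*>([\s\S]*?)<\/tr>/g;
  var tdRe = /<td[^>]*>([\s\S]*?)<\/td>/g;
  var tr, td;
  while ((tr = trRe.exec(html))) {
    var cells = [];
    while ((td = tdRe.exec(tr[1]))) {
      cells.push(td[1].replace(/<[^>]+>/g, '').trim()); // strip inner tags
    }
    if (cells.length) rows.push(cells);
  }
  return rows;
}

var sample = '<table><tr><td>1</td><td>foo</td></tr>' +
             '<tr><td>2</td><td>bar</td></tr></table>';
console.log(JSON.stringify(parseRows(sample)));
```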



To parse ly-law-record and ly-statistics:

update_record(json_path, output_path)

For path arguments given without a file type (e.g. to_path, output_path), do not include the extension:

  • init_record('../ly-record/record')

  • update_record('../ly-record/record.json', '../ly-record/record')

Each function generates both a CSV and a JSON file.


  • get-calendar-by-year(year, seen) => entries

    Crawls the calendar from
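The `seen` argument presumably lets repeated crawls skip entries already collected. A hypothetical sketch of that contract, with `fetchYear` standing in for the real crawler:

```javascript
// Hypothetical sketch of the (year, seen) => entries contract: `seen` tracks
// ids already collected, so repeated crawls only return new entries.
function getCalendarByYear(year, seen, fetchYear) {
  // fetchYear is a stand-in for the real crawler; it returns [{id, ...}, ...]
  return fetchYear(year).filter(function (e) {
    if (seen[e.id]) return false; // already collected on a previous crawl
    seen[e.id] = true;
    return true;
  });
}

var fake = function () { return [{ id: 'a' }, { id: 'b' }]; };
var seen = {};
console.log(getCalendarByYear(2014, seen, fake).length); // 2
console.log(getCalendarByYear(2014, seen, fake).length); // 0
```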



  1. Stub the response and save it to a .yml cassette

    Test architecture stub

  2. Replay the fake response and compare the result

    Test architecture replay


  1. Run npm run shot:something.

    This shoots both cassettes and snapshots over the network,

    Test workflow shot1

    or shoots only the snapshots, using the existing cassettes.

    Test workflow shot2

  2. Run npm run test:something.

    Test workflow spec

## Shot fixtures for tests

  • Calendar

    $ npm run shot:calendar

If you don't have any cassettes under test/fixtures/cassettes/something/*.yml, it will shoot both cassettes and snapshots using the network.

Crawl calendar by using network

If you already have them, it will shoot only the snapshots, using the cassettes in test/fixtures/cassettes/something/*.yml.

Crawl Calendar by using cassettes

## Run tests

  • Run all tests

    $ npm run test
  • Run a specific test

    • Calendar

      $ npm run test:calendar

      Test Calendar

## CC0 1.0 Universal

To the extent possible under law, Chia-liang Kao has waived all copyright and related or neighboring rights to twlyparser.

This work is published from Taiwan.