This repository has been archived by the owner on Oct 3, 2018. It is now read-only.
The reason these fields aren't just baked into the datapoint by the test to begin with is the string de-duplication scheme: we don't want to store each reporter name iterations × checkpoints times per run in sqlite. A database with proper transparent compression would solve this, and then we could just prefix whatever we wanted onto the reporter name and it would work.
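To make the trade-off concrete, here is a minimal sketch of what string de-duplication in sqlite looks like. This is a hypothetical schema for illustration, not the actual areweslimyet DB layout: reporter names are interned once in a lookup table, and each datapoint stores only the integer id.

```python
import sqlite3

# Hypothetical minimal schema: names live once in a lookup table,
# datapoints reference them by integer id instead of repeating the text.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE names (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE datapoints (name_id INTEGER REFERENCES names(id), value INTEGER);
""")

def intern_name(conn, name):
    # Insert the string only if it is not already present, then return its id.
    conn.execute("INSERT OR IGNORE INTO names (name) VALUES (?)", (name,))
    return conn.execute("SELECT id FROM names WHERE name = ?", (name,)).fetchone()[0]

# Many datapoints share the same reporter name, but the text is stored once.
for value in range(1000):
    conn.execute("INSERT INTO datapoints VALUES (?, ?)",
                 (intern_name(conn, "explicit/js-non-window"), value))

print(conn.execute("SELECT COUNT(*) FROM names").fetchone()[0])       # 1
print(conn.execute("SELECT COUNT(*) FROM datapoints").fetchone()[0])  # 1000
```

The downside, as noted above, is that any per-datapoint decoration (like a process-name prefix) defeats the de-duplication, since every distinct decorated string becomes a new row in the lookup table.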
For the record, if you change the JSON format, the procedure to re-export the old databases is roughly:
```sh
# Run this over a weekend; it won't be quick, since these DBs are huge and sqlite is slow.
for compressedMonthDB in db/areweslimyet-*.sqlite.xz; do
    monthDB="${compressedMonthDB%.xz}"
    seriesName="$(basename "${monthDB%.sqlite}")"
    monthJSON="html/data/$seriesName.json.gz"
    util/unarchive_db.sh "$compressedMonthDB"
    util/update_db.py "$monthDB"
    mv -v "$monthJSON" "$monthJSON.oldFormat"
    ./create_graph_json.py "$monthDB" "$seriesName" html/data/
    util/archive_db.sh "$monthDB"
done
```
(Do this one at a time with a script, because having them all uncompressed at once will fill the disk.)
As of ae88a34, create_graph_json.py handles the new DB format but doesn't actually use anything other than the main process. The next step is to update the script to use the process names (and annotate kinds).
Once process names are added to the DB (#77) we should update our scripts to actually use them.