IATI Data Quality measurement tool

License: AGPL v3.0

Copyright (C) 2012

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.


You need RabbitMQ running for the test processing to work. It's sensible to set it up before starting.
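As a quick sanity check before starting, you can verify that something is listening on RabbitMQ's default AMQP port. This helper is a sketch, not part of the tool; the host, port, and timeout are assumptions, so adjust them to match your broker configuration.

```python
import socket

def rabbitmq_reachable(host="localhost", port=5672, timeout=2.0):
    """Return True if something accepts TCP connections on host:port.

    5672 is RabbitMQ's default AMQP port; change it if your broker
    is configured differently.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("RabbitMQ reachable:", rabbitmq_reachable())
```

This only confirms the port is open; it doesn't authenticate against the broker.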

Set up a virtualenv:

virtualenv ./pyenv

Activate the virtualenv:

source ./pyenv/bin/activate

Install the requirements:

pip install -r requirements.txt

Copy and edit the configuration file template.


Run the setup script to populate the database (append --minimal if you want to try it out with just a few packages):

python --setup

This will also create a default username and password; please create a new user and then delete the default one!

Run the server:

python runserver

To download the data and get the tests running, run the backends (more details below):


You can also use supervisor:

  1. Rename the provided supervisord.conf.tmpl file to supervisord.conf, and make sure the paths in it match your setup (especially the path to your virtualenv)
  2. Run supervisord -n (remove -n if you don't want to see the output, though it's useful when testing)
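If you prefer to fill in the template programmatically rather than by hand, a substitution sketch is below. The placeholder names ($venv, $app_dir) and the example line are hypothetical; check supervisord.conf.tmpl for the actual placeholders it uses.

```python
from string import Template

def render_supervisord_conf(tmpl_text, venv, app_dir):
    """Substitute virtualenv and app paths into a supervisord template.

    safe_substitute leaves any unknown placeholders untouched, so a
    partially-filled template is still visible for manual editing.
    """
    return Template(tmpl_text).safe_substitute(venv=venv, app_dir=app_dir)

# Illustrative template lines only; not copied from the real file.
example = "command=$venv/bin/python runserver\ndirectory=$app_dir"
print(render_supervisord_conf(example, "/home/me/pyenv", "/home/me/dataquality"))
```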

Choose packages for activation

From the web interface, log in, open the top-right drop-down menu, and click "manage packages". Select the packages you want to activate and click "activate packages". Then open the drop-down menu again and choose "run tests".

Remember, you need the download and tests backends running for this; you can start them directly with ./ or through supervisor.

Survey component

The survey component currently requires three files (this could be abstracted in future). Copy them from the tests directory to the DATA_STORAGE_DIR you specified in your configuration. E.g., if you set the directory to /home/me/data/:

cp tests/2012_2013_organisation_mapping.csv /home/me/data/
cp tests/2012_indicators.csv /home/me/data/
cp tests/2012_results.csv /home/me/data/
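To confirm the three survey CSVs made it into DATA_STORAGE_DIR, you could run a small check like the one below. This is a sketch, not part of the tool; the directory is the example path from the text, so use the value from your own configuration.

```python
import os

# The three files the survey component expects, as listed above.
REQUIRED_SURVEY_FILES = [
    "2012_2013_organisation_mapping.csv",
    "2012_indicators.csv",
    "2012_results.csv",
]

def missing_survey_files(data_dir, required=REQUIRED_SURVEY_FILES):
    """Return the required filenames that are absent from data_dir."""
    return [name for name in required
            if not os.path.exists(os.path.join(data_dir, name))]

if __name__ == "__main__":
    print(missing_survey_files("/home/me/data/"))
```

An empty list means all three files are in place.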


To reset the database and reload packages from scratch:

python --drop-db
python --init-db
python --setup --minimal
python --refresh --minimal
bin/dqtool --mode=reload-packages --organisation=GB-1

Run unit tests


Run aggregation test

bin/dqtool --mode compare-aggregation --organisation GB-1 --filename unittests/artefacts/json/dfid-sample-aggregation-data.json

This runs an aggregation on the packages for organisation GB-1 and compares the results with the stashed file in unittests/artefacts/json/dfid-sample-aggregation-data.json; if the results differ, a new file is output.
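Conceptually, the comparison step works like the sketch below: load the freshly computed results alongside the stashed reference JSON and report where they diverge. The flat key/value layout is an assumption; the real aggregation files may be nested.

```python
import json

def compare_aggregations(new_results, stashed_path):
    """Return the sorted keys whose values differ between new_results
    (a dict of freshly computed aggregates) and the stashed JSON file."""
    with open(stashed_path) as f:
        stashed = json.load(f)
    all_keys = set(new_results) | set(stashed)
    return sorted(k for k in all_keys if new_results.get(k) != stashed.get(k))
```

An empty result means the new aggregation matches the stashed file, so there is nothing new to write out.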

Reload a package

bin/dqtool --mode reload-package --name dfid-tz

Adding new tests

./ --enroll-tests --filename tests/some-new-file.csv

You will then need to associate each test with an indicator:

bin/dqtool --mode associate-test --test-id 52 --indicator conditions

Checking if tests are complete

bin/dqtool --mode check-package-results --all --organisation GB-1

Updating sample poisoning

bin/dqtool --mode update-sampling