UPDATE: On 8 March 2018, we updated the code to support Python 3 (via
futurize). If you find any problems, please let us know. Thanks.
This repository contains a Python 2.7/3 implementation of the nonparametric, linear-time goodness-of-fit test described in our paper

A Linear-Time Kernel Goodness-of-Fit Test
Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur Gretton
NIPS 2017 (Best Paper)
https://arxiv.org/abs/1705.07673
How to install?
The package can be installed with pip:

```
pip install git+https://github.com/wittawatj/kernel-gof.git
```

Once installed, you should be able to run `import kgof` without any error.
pip will also resolve the following dependencies automatically.
The following Python packages were used during development. Ideally, use these packages with the specified version numbers or newer. However, older versions may work as well, since we did not specifically rely on the newest features of the versions listed.
- autograd == 1.1.7
- matplotlib == 2.0.0
- numpy == 1.11.3
- scipy == 0.19.0
To get started, check the demo Jupyter notebook, which will guide you through
from the beginning. It can also be viewed on the web. There are many other
Jupyter notebooks in the ipynb/ folder demonstrating the other implemented
tests; be sure to check them if you would like to explore.
Reproduce experimental results
Each experiment is defined in its own Python file with a name starting with
`ex` followed by a number (e.g., ex1_vary_n.py). All the experiment files live
together in one folder. Each file is runnable with a command-line argument. For
example, in ex1_vary_n.py we aim to check the test power of each testing
algorithm as a function of the sample size n. The script ex1_vary_n.py takes a
dataset name as its argument. See run_ex1.sh, a standalone Bash script showing
how to execute ex1_vary_n.py.
We used the independent-jobs
package to parallelize our experiments over a
Slurm cluster (the package is not needed if you
just want to use our developed tests). For example, for
ex1_vary_n.py, a job is created for each combination of
(dataset, test algorithm, n, trial).
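The one-job-per-combination scheme amounts to enumerating a Cartesian product of experiment settings. A minimal sketch with `itertools.product` (the dataset and algorithm names below are made up for illustration; this is not the independent-jobs API):

```python
import itertools

# Hypothetical experiment grid; the names are illustrative only.
datasets = ['dataset_a', 'dataset_b']
algorithms = ['alg_one', 'alg_two']
sample_sizes = [1000, 2000]
n_trials = 3

# One job per (dataset, test algorithm, n, trial) tuple.
jobs = list(itertools.product(datasets, algorithms, sample_sizes, range(n_trials)))
print(len(jobs))  # 2 * 2 * 2 * 3 = 24 jobs
```

Each tuple would then be submitted as one unit of work, so the cluster can process them independently.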
If you do not use Slurm, you can change the line

```
engine = SlurmComputationEngine(batch_parameters)
```

to

```
engine = SerialComputationEngine()
```

which instructs the computation engine to use a normal for-loop on a
single machine (this will take a long time). Other computation engines that you
use might also be supported; see the independent-jobs repository
page. Running the simulation will
create many result files (one for each tuple above), saved as Pickle files.
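Pickle result files like these can be written and read back with Python's standard `pickle` module. A minimal sketch (the file name and dictionary keys here are made up, not the repository's actual naming scheme):

```python
import os
import pickle
import tempfile

# Write a fake result file the way an experiment might (illustrative only).
result = {'dataset': 'dataset_a', 'alg': 'alg_one', 'n': 1000, 'trial': 0,
          'h0_rejected': True, 'time_secs': 1.23}
path = os.path.join(tempfile.mkdtemp(), 'ex1-demo.p')
with open(path, 'wb') as f:
    pickle.dump(result, f)

# Load the result back for later analysis or plotting.
with open(path, 'rb') as f:
    loaded = pickle.load(f)
print(loaded['n'])  # 1000
```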
Also, the independent-jobs package requires a scratch folder to save temporary
files used for communication among computing nodes. The path to the folder
containing the saved results can be specified in
kgof/config.py by changing the value of expr_results_path:
```
# Full path to the directory to store experimental results.
'expr_results_path': '/full/path/to/where/you/want/to/save/results/',
```
The scratch folder needed by the independent-jobs package can be specified in
the same file by changing the value of scratch_path:
```
# Full path to the directory to store temporary files when running experiments.
'scratch_path': '/full/path/to/a/temporary/folder/',
```
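A quick way to sanity-check such dictionary-style settings is to verify the paths exist before launching an experiment. A hypothetical helper sketch (not kgof/config.py's actual contents; the dictionary name and temp paths are placeholders):

```python
import os
import tempfile

# Hypothetical config dictionary mirroring the two entries above.
# tempfile.mkdtemp() stands in for your real, existing directories.
expr_configs = {
    'expr_results_path': tempfile.mkdtemp(),  # replace with your results path
    'scratch_path': tempfile.mkdtemp(),       # replace with your scratch path
}

# Fail early if a path does not exist, instead of deep inside an experiment run.
for key, path in expr_configs.items():
    assert os.path.isdir(path), '%s does not exist: %s' % (key, path)
print('config OK')
```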
To plot the results, see the experiment's corresponding Jupyter notebook in the
ipynb/ folder. For example, the results of ex1_vary_n.py can be plotted with
its corresponding notebook there.
When adding new code that autograd must differentiate, use
np.dot(X, Y) instead of the method form X.dot(Y);
autograd cannot differentiate the latter. Also, do not use in-place updates
such as x += ...; use x = x + ... instead.
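The two recommended patterns can be illustrated with plain numpy (this only shows that the forms are numerically equivalent; the differentiation constraint itself comes from autograd):

```python
import numpy as np

X = np.arange(6.0).reshape(2, 3)
Y = np.arange(12.0).reshape(3, 4)

# Preferred: the functional form, which autograd can trace.
Z1 = np.dot(X, Y)
# Method form: numerically identical, but avoid it in code autograd differentiates.
Z2 = X.dot(Y)
assert np.allclose(Z1, Z2)

# Preferred: an out-of-place update, which keeps the computation graph intact.
x = np.ones(3)
x = x + 2.0
# Avoid in autograd-differentiated code: in-place updates such as  x += 2.0
print(x)  # [3. 3. 3.]
```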
If you have questions or comments about anything related to this work, please do not hesitate to contact Wittawat Jitkrittum.