From 13721c3105dc37c698e4610c84b628f51fdf2ee7 Mon Sep 17 00:00:00 2001 From: Heiko Strathmann Date: Mon, 4 Jul 2016 19:38:31 +0100 Subject: [PATCH] some work towards new interface --- .../mmd_two_sample_testing.ipynb | 2158 ++++++++--------- 1 file changed, 1069 insertions(+), 1089 deletions(-) diff --git a/doc/ipython-notebooks/statistical_testing/mmd_two_sample_testing.ipynb b/doc/ipython-notebooks/statistical_testing/mmd_two_sample_testing.ipynb index 5a68c03807a..7480f324d6f 100644 --- a/doc/ipython-notebooks/statistical_testing/mmd_two_sample_testing.ipynb +++ b/doc/ipython-notebooks/statistical_testing/mmd_two_sample_testing.ipynb @@ -1,1093 +1,1073 @@ { + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Kernel hypothesis testing in Shogun" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Heiko Strathmann - heiko.strathmann@gmail.com - github.com/karlnapf - herrstrathmann.de\n", + "#### Rahul De - heavensdevil6909@gmail.com - github.com/lambday" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook describes Shogun's framework for statistical hypothesis testing. We begin by giving a brief outline of the problem setting and then describe various implemented algorithms.\n", + "All algorithms discussed here are instances of kernel two-sample testing with the *maximum mean discrepancy*, and are based on embedding probability distributions into Reproducing Kernel Hilbert Spaces (RKHS)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "import matplotlib.pyplot as plt\n", + "%matplotlib inline\n", + "import modshogun as sg\n", + "import numpy as np" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Some Formal Basics (skip if you just want code examples)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called *null-hypothesis* $H_0:p=q$, which is the complement of an *alternative-hypothesis* $H_A$. \n", + "\n", + "To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The *null-distribution* corresponds to test statistic samples under the model that $H_0$ holds; the *alternative-distribution* corresponds to test statistic samples under the model that $H_A$ holds.\n", + "\n", + "In practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.\n", + "\n", + "There are two different kinds of errors in hypothesis testing:\n", + "\n", + " * A *type I error* is made when $H_0: p=q$ is wrongly rejected. 
That is, the test says that the samples are from different distributions when they are not.\n", + " * A *type II error* is made when $H_A: p\\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.\n", + "\n", + "A so-called *consistent* test achieves zero type II error for a fixed type I error, as it sees more data.\n", + "\n", + "To decide whether to reject $H_0$, one could set a threshold, say at the $95\\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies above that threshold. This means that, if the samples were generated under $H_0$, the chance of observing a test statistic above the threshold is only $5\\%$. We call this number the *test power* $\\alpha$ (in this case $\\alpha=0.05$). It is an upper bound on the probability for a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called *p-value*, and to compare the p-value against a desired test power, say $\\alpha=0.05$, by hand. The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.\n", + "\n", + "In order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this is called the *permutation test*, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for each of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. Shogun comes with an extremely optimized implementation though. For completeness, Shogun also includes a number of more sophisticated ways of approximating the null distribution." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Base class for Hypothesis Testing" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Shogun implements statistical testing in the abstract class CHypothesisTest. All implemented methods will work with this interface at their most basic level. We here focus on CTwoSampleTest. This class offers methods to\n", + "\n", + " * compute the implemented test statistic,\n", + " * compute p-values for a given value of the test statistic,\n", + " * compute a test threshold for a given p-value,\n", + " * approximate the null distribution, e.g. perform the permutation test, and\n", + " * perform a full two-sample test, and either return a p-value or a binary rejection decision. This method is most useful in practice. Note that, depending on the used test statistic, it might be faster to call this than to compute the threshold and the test statistic separately with the above methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Kernel Two-Sample Testing with the Maximum Mean Discrepancy" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$\\DeclareMathOperator{\\mmd}{MMD}$\n", + "An important class of hypothesis tests is the class of *two-sample tests*. \n", + "In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some *arbitrary* domains $\\mathcal{X}, \\mathcal{Y}$ respectively, and i.i.d. 
samples $X=\\{x_i\\}_{i=1}^m\\subseteq \\mathcal{X}\\sim p$ and $Y=\\{y_i\\}_{i=1}^n\\subseteq \\mathcal{Y}\\sim q$, the two-sample test distinguishes the hypotheses\n", + "\n", + "\\begin{align*}\n", + "H_0: p=q\\\\\n", + "H_A: p\\neq q\n", + "\\end{align*}\n", + "\n", + "In order to solve this problem, it is desirable to have a criterion that takes a unique positive value if $p\\neq q$, and zero if and only if $p=q$. The so-called *Maximum Mean Discrepancy (MMD)* has this property and allows one to distinguish any two probability distributions, if used in a *reproducing kernel Hilbert space (RKHS)*. It is the distance between the mean embeddings $\\mu_p, \\mu_q$ of the distributions $p,q$ in such an RKHS $\\mathcal{F}$ -- and its square can be expressed in terms of expectations of kernel functions, i.e.\n", + "\n", + "\\begin{align*}\n", + "\\mmd^2[\\mathcal{F},p,q]&=||\\mu_p-\\mu_q||_\\mathcal{F}^2\\\\\n", + "&=\\textbf{E}_{x,x'}\\left[ k(x,x')\\right]-\n", + " 2\\textbf{E}_{x,y}\\left[ k(x,y)\\right]\n", + " +\\textbf{E}_{y,y'}\\left[ k(y,y')\\right]\n", + "\\end{align*}\n", + "\n", + "Note that this formulation does not assume any particular form of the input data; we just need a kernel function whose feature space is an RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (CDenseFeatures, CSparseFeatures, CStringFeatures, etc), as long as we or you provide a positive definite kernel function under the interface of CKernel.\n", + "\n", + "We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods to approximate the null-distribution in order to construct a two-sample test." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Running Example Data: Gaussian vs. Laplace" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian\n", + "\n", + "$p(x)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right)$\n", + "\n", + "with mean $\\mu$ and variance $\\sigma^2$, which is compared against the 1D Laplace distribution\n", + "\n", + "$p(x)=\\frac{1}{2b}\\exp\\left(-\\frac{|x-\\mu|}{b}\\right)$\n", + "\n", + "with the same mean $\\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\\sqrt{\\frac{1}{2}}$, which means that $2b^2=\\sigma^2=1$." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# use scipy for generating samples\n", + "from scipy.stats import laplace, norm\n", + "\n", + "def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=np.sqrt(0.5)): \n", + " # sample from both distributions\n", + " X=norm.rvs(size=n)*np.sqrt(sigma2)+mu\n", + " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", + " \n", + " return X,Y" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "mu=0.0\n", + "sigma2=1\n", + "b=np.sqrt(0.5)\n", + "n=220\n", + "X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)\n", + "\n", + "# plot both densities and histograms\n", + "plt.figure(figsize=(18,5))\n", + "plt.suptitle(\"Gaussian vs. 
Laplace\")\n", + "plt.subplot(121)\n", + "Xs=np.linspace(-2, 2, 500)\n", + "plt.plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))\n", + "plt.plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))\n", + "plt.title(\"Densities\")\n", + "plt.xlabel(\"$x$\")\n", + "plt.ylabel(\"$p(x)$\")\n", + "\n", + "plt.subplot(122)\n", + "plt.hist(X, alpha=0.5)\n", + "plt.xlim([-5,5])\n", + "plt.ylim([0,100])\n", + "plt.hist(Y,alpha=0.5)\n", + "plt.xlim([-5,5])\n", + "plt.ylim([0,100])\n", + "plt.legend([\"Gaussian\", \"Laplace\"])\n", + "plt.title('Samples');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at *all* (infinitely many) higher order moments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "print \"Gaussian vs. Laplace\"\n", + "print \"Sample means: %.2f vs %.2f\" % (np.mean(X), np.mean(Y))\n", + "print \"Samples variances: %.2f vs %.2f\" % (np.var(X), np.var(Y))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quadratic Time MMD" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in CQuadraticTimeMMD, which accepts any type of features in Shogun, and use it on the above toy problem.\n", + "\n", + "An unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples\n", + "\n", + "$$\n", + "\\mmd_u[\\mathcal{F},X,Y]^2=\\frac{1}{m(m-1)}\\sum_{i=1}^m\\sum_{j\\neq i}^mk(x_i,x_j) + \\frac{1}{n(n-1)}\\sum_{i=1}^n\\sum_{j\\neq i}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n", + "$$\n", + "\n", + "A biased estimate would be\n", + "\n", + "$$\n", + "\\mmd_b[\\mathcal{F},X,Y]^2=\\frac{1}{m^2}\\sum_{i=1}^m\\sum_{j=1}^mk(x_i,x_j) + \\frac{1}{n^ 2}\\sum_{i=1}^n\\sum_{j=1}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n", + ".$$\n", + "\n", + "Computing the test statistic using CQuadraticTimeMMD does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of both types. Both statistics' computational costs are quadratic both in time and space. Note that the method returns $m\\mmd_b[\\mathcal{F},X,Y]^2$ since null distribution approximations work on $m$ times null distribution. Here is how the test statistic itself is computed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# turn data into Shogun representation (columns vectors)\n", + "feat_p=sg.RealFeatures(X.reshape(1,len(X)))\n", + "feat_q=sg.RealFeatures(Y.reshape(1,len(Y)))\n", + "\n", + "# choose kernel for testing. 
Here: Gaussian\n", + "kernel_width=1\n", + "kernel=sg.GaussianKernel(10, kernel_width)\n", + "\n", + "# create mmd instance of test-statistic\n", + "mmd=sg.QuadraticTimeMMD()\n", + "mmd.set_kernel(kernel)\n", + "mmd.set_p(feat_p)\n", + "mmd.set_q(feat_q)\n", + "\n", + "# compute biased and unbiased test statistic (default is unbiased)\n", + "mmd.set_statistic_type(sg.ST_BIASED_FULL)\n", + "biased_statistic=mmd.compute_statistic()\n", + "\n", + "mmd.set_statistic_type(sg.ST_UNBIASED_FULL)\n", + "statistic=unbiased_statistic=mmd.compute_statistic()\n", + "\n", + "print \"%d x MMD_b[X,Y]^2=%.2f\" % (len(X), biased_statistic)\n", + "print \"%d x MMD_u[X,Y]^2=%.2f\" % (len(X), unbiased_statistic)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Any sub-class of CHypothesisTest can approximate the null distribution using permutation/bootstrapping. This approach is always guaranteed to produce consistent results; however, it might take a long time, as for each sample of the null distribution, the test statistic has to be re-computed for a different permutation of the data. Shogun's implementation is highly optimized, exploiting low-level CPU caching and multiple available cores." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\n", + "mmd.set_num_null_samples(200)\n", + "\n", + "# now show a couple of ways to compute the test\n", + "\n", + "# compute p-value for computed test statistic\n", + "p_value=mmd.compute_p_value(statistic)\n", + "print \"P-value of MMD value %.2f is %.2f\" % (statistic, p_value)\n", + "\n", + "# compute threshold for rejecting H_0 for a given test power\n", + "alpha=0.05\n", + "threshold=mmd.compute_threshold(alpha)\n", + "print \"Threshold for rejecting H0 with a test power of %.2f is %.2f\" % (alpha, threshold)\n", + "\n", + "# performing the test by hand given the above results, note that those two are equivalent\n", + "if statistic>threshold:\n", + " print \"H0 is rejected with confidence %.2f\" % alpha\n", + " \n", + "if p_value<alpha:\n", + " print \"H0 is rejected with confidence %.2f\" % alpha" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now let us visualise the distribution of the MMD statistic under $H_0:p=q$ and $H_A:p\\neq q$. We sample both the null and the alternative distribution for that. Use the interface of CHypothesisTest to sample from the null distribution (permutations, re-computing of test statistic is done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed, and infinite data is needed. This is not needed in practice, but is done here for illustration purposes."
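+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As an aside, the next cell sketches what a single permutation step of the null approximation looks like in plain NumPy: the pooled samples are shuffled and split again, which simulates $H_0:p=q$. This is purely an illustration of the idea -- the permutations used by sample_null below are done internally by Shogun, and much more efficiently." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# illustration only: one permutation step of the permutation test\n", + "# (Shogun repeats this internally when sampling the null distribution)\n", + "Z=np.concatenate([X, Y])\n", + "perm=np.random.permutation(len(Z))\n", + "X_perm=Z[perm[:len(X)]]\n", + "Y_perm=Z[perm[len(X):]]\n", + "\n", + "# statistic on permuted data, i.e. one sample from the null distribution\n", + "mmd.set_p(sg.RealFeatures(X_perm.reshape(1,len(X_perm))))\n", + "mmd.set_q(sg.RealFeatures(Y_perm.reshape(1,len(Y_perm))))\n", + "print \"Statistic on permuted data: %.2f\" % mmd.compute_statistic()\n", + "\n", + "# restore the original data for the cells below\n", + "mmd.set_p(feat_p)\n", + "mmd.set_q(feat_q)"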
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "num_samples=500\n", + "\n", + "# sample null distribution\n", + "null_samples=mmd.sample_null()\n", + "\n", + "# sample alternative distribution, generate new data for that\n", + "alt_samples=np.zeros(num_samples)\n", + "for i in range(num_samples):\n", + " X=norm.rvs(size=n, loc=mu, scale=sigma2)\n", + " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", + " feat_p=sg.RealFeatures(np.reshape(X, (1,len(X))))\n", + " feat_q=sg.RealFeatures(np.reshape(Y, (1,len(Y))))\n", + " # TODO: reset pre-computed kernel here\n", + " mmd.set_p(feat_p)\n", + " mmd.set_q(feat_q)\n", + " alt_samples[i]=mmd.compute_statistic()\n", + "\n", + "np.std(alt_samples)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Null and Alternative Distribution Illustrated" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Visualise both distributions, $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\\alpha)$-quantil of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II error:\n", + "\n", + " * type I error is the area of the null distribution being right of the threshold\n", + " * type II error is the area of the alternative distribution being left from the threshold" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "def plot_alt_vs_null(alt_samples, null_samples, alpha):\n", + " plt.figure(figsize=(18,5))\n", + " \n", + " plt.subplot(131)\n", + " plt.hist(null_samples, 50, color='blue')\n", + " plt.title('Null distribution')\n", + " plt.subplot(132)\n", + " plt.title('Alternative distribution')\n", + " plt.hist(alt_samples, 50, color='green')\n", + " \n", + " plt.subplot(133)\n", + " plt.hist(null_samples, 50, color='blue')\n", + " plt.hist(alt_samples, 50, color='green', alpha=0.5)\n", + " plt.title('Null and alternative distriution')\n", + " \n", + " # find (1-alpha) element of null distribution\n", + " null_samples_sorted=np.sort(null_samples)\n", + " quantile_idx=int(num_samples*(1-alpha))\n", + " quantile=null_samples_sorted[quantile_idx]\n", + " plt.axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')\n", + " legend();" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "plot_alt_vs_null(alt_samples, null_samples, alpha)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Different Ways to Approximate the Null Distribution for the Quadratic Time MMD" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As already mentioned, permuting the data to access the null distribution is probably the method of choice, due to the efficient implementation in Shogun. There exist a couple of methods that are more sophisticated (and slower) and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.\n", + "\n", + "The first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. 
Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. Under $H_0$, the biased statistic converges in distribution as\n", + "\n", + "$$\n", + "m\\mmd^2_b \\rightarrow \\sum_{l=1}^\\infty \\lambda_l z_l^2\n", + "$$\n", + "\n", + "where $z_l\\sim \\mathcal{N}(0,2)$ are i.i.d. normal samples and $\\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\\hat\\lambda_l=\\frac{1}{m}\\nu_l$ where $\\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:\n", + "\n", + " * Number of samples from the null-distribution. The more, the more accurate.\n", + " * Number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use. The more, the better the results get. However, the Eigen-spectrum of the joint gram matrix usually decreases very fast. Plotting the Spectrum can help. See [2] for details.\n", + "\n", + "If the kernel matrices are diagonally dominant, this method is likely to fail. For that and more details, see the original paper. Computational costs are likely to be larger than for permutation testing, due to the efficient implementation of the latter: Eigenvalues of the gram matrix cost $\\mathcal{O}(m^3)$.\n", + "\n", + "Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class CQuadraticTimeMMD. This method only works with the biased statistic." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# optional: plot spectrum of joint kernel matrix\n", + "\n", + "# TODO: it would be good if there was a way to extract the joint kernel matrix for all kernel tests\n", + "\n", + "# get joint feature object and compute kernel matrix and its spectrum\n", + "feats_p_q=mmd.get_p_and_q()\n", + "mmd.get_kernel().init(feats_p_q, feats_p_q)\n", + "K=mmd.get_kernel().get_kernel_matrix()\n", + "w,_=np.linalg.eig(K)\n", + "\n", + "# visualise K and its spectrum (only up to threshold)\n", + "plt.figure(figsize=(18,5))\n", + "plt.subplot(121)\n", + "plt.imshow(K, interpolation=\"nearest\")\n", + "plt.title(\"Kernel matrix K of joint data $X$ and $Y$\")\n", + "plt.subplot(122)\n", + "thresh=0.1\n", + "plt.plot(w[:len(w[w>thresh])])\n", + "plt.title(\"Eigenspectrum of K until component %d\" % len(w[w>thresh]));" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not lose a lot of accuracy while gaining a significant speedup. For slower decaying Eigenspectrums, this approximation might be more expensive."
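+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To make this more concrete, the following is a rough NumPy sketch of the sampling step described above: the largest Eigenvalues of the centred joint kernel matrix are rescaled by $\\frac{1}{m}$ and then used to draw samples of $\\sum_l \\lambda_l z_l^2$ with $z_l\\sim\\mathcal{N}(0,2)$. This only illustrates the formula quoted above; the actual test in the next cell uses Shogun's implementation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# rough sketch of the spectrum approximation (illustration only)\n", + "num_data=K.shape[0]\n", + "\n", + "# centre the joint kernel matrix and compute its eigenvalues\n", + "H=np.eye(num_data)-1.0/num_data\n", + "nu=np.sort(np.real(np.linalg.eigvals(H.dot(K).dot(H))))[::-1]\n", + "\n", + "# keep eigenvalues above the threshold and divide by m, cf. the formula above\n", + "lambdas=nu[nu>thresh]/len(X)\n", + "\n", + "# draw samples of sum_l lambda_l z_l^2 with z_l ~ N(0,2)\n", + "z=np.random.randn(num_samples, len(lambdas))*np.sqrt(2)\n", + "null_sketch=(lambdas*z**2).sum(axis=1)\n", + "print \"Spectrum sketch: 95%% quantile of the null is about %.2f\" % np.percentile(null_sketch, 95)"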
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# threshold for eigenspectrum\n", + "thresh=0.1\n", + "\n", + "# compute number of eigenvalues to use\n", + "num_eigen=len(w[w>thresh])\n", + "\n", + "# finally, do the test, use biased statistic\n", + "mmd.set_statistic_type(sg.ST_BIASED_FULL)\n", + "\n", + "# tell Shogun to use spectrum approximation\n", + "mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)\n", + "mmd.set_num_eigenvalues_spectrum(num_eigen)\n", + "mmd.set_num_samples_spectrum(num_samples)\n", + "\n", + "# the usual test interface\n", + "p_value_spectrum=mmd.perform_test()\n", + "print \"Spectrum: P-value of MMD test is %.2f\" % p_value_spectrum\n", + "\n", + "# compare with ground truth bootstrapping\n", + "mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\n", + "mmd.set_num_null_samples(num_samples)\n", + "p_value_boot=mmd.perform_test()\n", + "print \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_boot" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Gamma Moment Matching Approximation and Type I errors" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$\\DeclareMathOperator{\\var}{var}$\n", + "Another method for approximating the null-distribution is by matching the first two moments of a Gamma distribution and then computing its quantiles. This does not result in a consistent test, but usually also gives good results while being very fast. However, there are distributions where the method fails. Therefore, the type I error should always be monitored. The method is described in [2]. It uses\n", + "\n", + "$$\n", + "m\\mmd_b(Z) \\sim \\frac{x^{\\alpha-1}\\exp(-\\frac{x}{\\beta})}{\\beta^\\alpha \\Gamma(\\alpha)}\n", + "$$\n", + "\n", + "where\n", + "\n", + "$$\n", + "\\alpha=\\frac{(\\textbf{E}(\\text{MMD}_b(Z)))^2}{\\var(\\text{MMD}_b(Z))} \\qquad \\text{and} \\qquad\n", + " \\beta=\\frac{m \\var(\\text{MMD}_b(Z))}{(\\textbf{E}(\\text{MMD}_b(Z)))^2}\n", + "$$\n", + "\n", + "Then, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\\mathcal{O}(m^2)$. Note that the test is parameter free. It only works with the biased statistic." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# tell Shogun to use gamma approximation\n", + "mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)\n", + "\n", + "# the usual test interface\n", + "p_value_gamma=mmd.perform_test()\n", + "print \"Gamma: P-value of MMD test is %.2f\" % p_value_gamma\n", + "\n", + "# compare with ground truth bootstrapping\n", + "mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\n", + "p_value_boot=mmd.perform_test()\n", + "print \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_boot" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similarly to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average results. Below we compare type I errors of all methods for approximating the null distribution. This will take a while."
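+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a short aside before the comparison, the following sketch shows the mechanics of the moment matching described above, using scipy's gamma distribution. It matches the moments of permutation null samples of the biased statistic, whereas Shogun's NAM_MMD2_GAMMA estimates these moments directly, without sampling the null distribution -- so this is only meant to illustrate the idea." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "from scipy.stats import gamma\n", + "\n", + "# illustration of moment matching: fit a gamma distribution to null samples\n", + "# of the biased statistic via its first two moments (not Shogun's closed form)\n", + "mmd.set_statistic_type(sg.ST_BIASED_FULL)\n", + "mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\n", + "null_biased=mmd.sample_null()\n", + "\n", + "# shape and scale such that mean and variance of the gamma match the samples\n", + "a_hat=np.mean(null_biased)**2/np.var(null_biased)\n", + "scale_hat=np.var(null_biased)/np.mean(null_biased)\n", + "\n", + "# upper tail probability of the observed statistic under the fitted gamma\n", + "p_value_sketch=gamma.sf(mmd.compute_statistic(), a_hat, scale=scale_hat)\n", + "print \"Moment-matched gamma p-value (sketch): %.2f\" % p_value_sketch"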
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# type I error is false alarm, therefore sample data under H0\n", + "num_trials=50\n", + "rejections_gamma=zeros(num_trials)\n", + "rejections_spectrum=zeros(num_trials)\n", + "rejections_bootstrap=zeros(num_trials)\n", + "num_samples=50\n", + "alpha=0.05\n", + "for i in range(num_trials):\n", + " X=norm.rvs(size=n, loc=mu, scale=sigma2)\n", + " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", + " \n", + " # simulate H0 via merging samples before computing the \n", + " Z=hstack((X,Y))\n", + " X=Z[:len(X)]\n", + " Y=Z[len(X):]\n", + " feat_p=RealFeatures(reshape(X, (1,len(X))))\n", + " feat_q=RealFeatures(reshape(Y, (1,len(Y))))\n", + " \n", + " # gamma\n", + " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", + " mmd.set_null_approximation_method(sg.NAM_MMD2_GAMMA)\n", + " mmd.set_statistic_type(ST_BIASED_FULL)\n", + " rejections_gamma[i]=mmd.perform_test(alpha)\n", + " \n", + " # spectrum\n", + " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", + " mmd.set_null_approximation_method(sg.NAM_MMD2_SPECTRUM)\n", + " mmd.set_num_eigenvalues_spectrum(num_eigen)\n", + " mmd.set_num_samples_spectrum(num_samples)\n", + " mmd.set_statistic_type(ST_BIASED_FULL)\n", + " rejections_spectrum[i]=mmd.perform_test(alpha)\n", + " \n", + " # bootstrap (precompute kernel)\n", + " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", + " p_and_q=mmd.get_p_and_q()\n", + " kernel.init(p_and_q, p_and_q)\n", + " precomputed_kernel=CustomKernel(kernel)\n", + " mmd.set_kernel(precomputed_kernel)\n", + " mmd.set_null_approximation_method(sg.NAM_PERMUTATION)\n", + " mmd.set_num_null_samples(num_samples)\n", + " mmd.set_statistic_type(ST_BIASED_FULL)\n", + " rejections_bootstrap[i]=mmd.perform_test(alpha)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "convergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)\n", + "convergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)\n", + "convergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)\n", + "\n", + "print \"Average rejection rate of H0 for Gamma is %.2f\" % mean(convergence_gamma)\n", + "print \"Average rejection rate of H0 for Spectrum is %.2f\" % mean(convergence_spectrum)\n", + "print \"Average rejection rate of H0 for Bootstrapping is %.2f\" % mean(rejections_bootstrap)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We see that Gamma basically never rejects, which is inline with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Linear Time MMD on Gaussian Blobs" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in CLinearTimeMMD can help here, as it accepts data under the streaming interface CStreamingFeatures, which deliver data one-by-one.\n", + "\n", + "And it can do more cool things, for example choose the best single (or combined) kernel for you. 
But we need a more fancy dataset for that to show its power. We will use one of Shogun's streaming based data generator, CGaussianBlobsDataGenerator for that. This dataset consists of two distributions which are a grid of Gaussians where in one of them, the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# paramters of dataset\n", + "m=20000\n", + "distance=10\n", + "stretch=5\n", + "num_blobs=3\n", + "angle=pi/4\n", + "\n", + "# these are streaming features\n", + "gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\n", + "gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n", + "\t\t\n", + "# stream some data and plot\n", + "num_plot=1000\n", + "features=gen_p.get_streamed_features(num_plot)\n", + "features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))\n", + "data=features.get_feature_matrix()\n", + "\n", + "figure(figsize=(18,5))\n", + "subplot(121)\n", + "grid(True)\n", + "plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')\n", + "title('$X\\sim p$')\n", + "subplot(122)\n", + "grid(True)\n", + "plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)\n", + "_=title('$Y\\sim q$')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute\n", + "\n", + "$$\n", + "\\mmd_l^2[\\mathcal{F},X,Y]=\\frac{1}{m_2}\\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-\n", + " k(x_{2i+1},y_{2i})\n", + "$$\n", + "\n", + "where $ m_2=\\lfloor\\frac{m}{2} \\rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only pairs of four points are considered at once, this allows to compute it on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.\n", + "\n", + "We begin by illustrating how to pass data to CLinearTimeMMD. In order not to loose performance due to overhead, it is possible to specify a block size for the data stream." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "block_size=100\n", + "\n", + "# if features are already under the streaming interface, just pass them\n", + "mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\n", + "\n", + "# compute an unbiased estimate in linear time\n", + "statistic=mmd.compute_statistic()\n", + "print \"MMD_l[X,Y]^2=%.2f\" % statistic\n", + "\n", + "# note: due to the streaming nature, successive calls of compute statistic use different data\n", + "# and produce different results. 
Data cannot be stored in memory\n", + "for _ in range(5):\n", + " print \"MMD_l[X,Y]^2=%.2f\" % mmd.compute_statistic()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Sometimes, one might want to use CLinearTimeMMD with data that is stored in memory. In that case, it is easy to data in the form of for example CStreamingDenseFeatures into CDenseFeatures." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "# data source\n", + "gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\n", + "gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n", + "\n", + "# retreive some points, store them as non-streaming data in memory\n", + "data_p=gen_p.get_streamed_features(100)\n", + "data_q=gen_q.get_streamed_features(data_p.get_num_vectors())\n", + "print \"Number of data is %d\" % data_p.get_num_vectors()\n", + "\n", + "# cast data in memory as streaming features again (which now stream from the in-memory data)\n", + "streaming_p=StreamingRealFeatures(data_p)\n", + "streaming_q=StreamingRealFeatures(data_q)\n", + "\n", + "# it is important to start the internal parser to avoid deadlocks\n", + "streaming_p.start_parser()\n", + "streaming_q.start_parser()\n", + "\n", + "# example to create mmd (note that m can be maximum the number of data in memory)\n", + "\n", + "mmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)\n", + "print \"Linear time MMD statistic: %.2f\" % mmd.compute_statistic()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### The Gaussian Approximation to the Null Distribution" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since CLinearTimeMMD operates on streaming features, *new* data is taken from the stream in every iteration.\n", + "\n", + "Bootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.\n", + "\n", + "Since both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for\n", + "\n", + "$$\n", + "\\var[\\mmd_l^2[\\mathcal{F},X,Y]]\n", + "$$\n", + "\n", + "can simply be computed by computing the empirical variance of\n", + "\n", + "$$\n", + "k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \\qquad (1\\leq i\\leq m_2)\n", + "$$\n", + "\n", + "A normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.\n", + "\n", + "CLinearTimeMMD allows to approximate the null distribution in the same pass as computing the statistic itself (in linear time). 
This should always be used in practice since seperate calls of computing statistic and p-value will operator on different data from the stream. Below, we compute the test on a large amount of data (impossible to perform quadratic time MMD for this one as the kernel matrices cannot be stored in memory)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\n", + "print \"m=%d samples from p and q\" % m\n", + "print \"Binary test result is: \" + (\"Rejection\" if mmd.perform_test(alpha) else \"No rejection\")\n", + "print \"P-value test result is %.2f\" % mmd.perform_test()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Kernel Selection for the MMD -- Overview" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$\\DeclareMathOperator*{\\argmin}{arg\\,min}\n", + "\\DeclareMathOperator*{\\argmax}{arg\\,max}$\n", + "Now which kernel do we actually use for our tests? So far, we just plugged in arbritary ones. However, for kernel two-sample testing, it is possible to do something more clever.\n", + "\n", + "Shogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the CLinearTimeMMD, [3] describes a way of selecting the *optimal* kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation and it is possible for single kernels and also for convex combinations of them. For the CQuadraticTimeMMD, the best method in literature is choosing the kernel that maximised the MMD statistic [4]. For convex combinations of kernels, this can be achieved via a $L2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].\n", + "\n", + "MMD Kernel selection in Shogun always involves coosing a one of the methods of CGaussianKernel All methods compute their results for a fixed set of these baseline kernels. We later give an example how to use these classes after providing a list of available methods.\n", + "\n", + " * *KSM_MEDIAN_HEURISTIC*: Selects from a set CGaussianKernel instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution that can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousands. This method a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems.\n", + "\n", + " * *KSM_MAXIMIZE_MMD*: Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specific its estimate.\n", + "$$\n", + "k^*=\\argmax_{k\\in\\mathcal{K}} \\hat \\eta_k,\n", + "$$\n", + "where $\\eta_k$ is an empirical MMD estimate for using a kernel $k$.\n", + "This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this method works for both methods. 
However, for the linear time statistic, there exists a better method.\n", + " \n", + " * *KSM_MAXIMIZE_POWER*: Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.\n", + "$$\n", + "k^*=\\argmax_{k\\in\\mathcal{K}} \\frac{\\hat \\eta_k}{\\hat\\sigma_k+\\lambda},\n", + "$$\n", + "where $\\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\\hat\\sigma_k$ is a linear time variance estimate of $\\eta_k$ to which a small number $\\lambda$ is added to prevent division by zero.\n", + "These are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for CLinearTimeMMD. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. *Important:* For this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.\n", + " \n", + " * CMMDKernelSelectionCombMaxL2 Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogous to CMMDKernelSelectionMax. This is done via solving the convex program\n", + "$$\n", + "\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} \\{\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0\\},\n", + "$$\n", + "where $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.\n", + "Again, when using the linear time MMD, there are better methods available.\n", + "\n", + " * CMMDKernelSelectionCombOpt Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to \\emph{optimal} kernel selection in the same sense as in class CMMDKernelSelectionOpt and is its multiple kernel analogous. The convex program to solve is\n", + "$$\n", + "\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} (\\hat Q+\\lambda I) \\{\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0\\},\n", + "$$\n", + "where again $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\\hat Q$ is a linear time estimate of the covariance matrix of the vector $\\boldsymbol{\\eta}$ to whose diagonal a small number $\\lambda$ is added to prevent division by zero. See [3] for details. In contrast to CMMDKernelSelectionCombMaxL2, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains CMMDKernelSelectionOpt." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### MMD Kernel Selection in Shogun" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In order to use one of the above methods for kernel selection, one has to create a new instance of CCombinedKernel append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. 
Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call\n", + "\n", + " * *compute_measures* to compute a vector kernel selection criteria if a single kernel selection method is used. It will return a vector of selected kernel weights if a combined kernel selection method is used. For \\shogunclass{CMMDKernelSelectionMedian}, the method does throw an error.\n", + "\n", + " * *select\\_kernel* returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying CCombinedKernel instance where the subkernel weights are set to the weights that were selected by the method. \n", + "\n", + "In order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example how to select the optimal single and combined kernel for the Gaussian Blobs dataset." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "sigmas=[2**x for x in linspace(-5,5, 10)]\n", + "print \"Choosing kernel width from\", [\"{0:.2f}\".format(sigma) for sigma in sigmas]\n", + "combined=CombinedKernel()\n", + "for i in range(len(sigmas)):\n", + " combined.append_kernel(GaussianKernel(10, sigmas[i]))\n", + "\n", + "# mmd instance using streaming features\n", + "block_size=1000\n", + "mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)\n", + "\n", + "# optmal kernel choice is possible for linear time MMD\n", + "selection=MMDKernelSelectionOpt(mmd)\n", + "\n", + "# select best kernel\n", + "best_kernel=selection.select_kernel()\n", + "best_kernel=GaussianKernel.obtain_from_generic(best_kernel)\n", + "print \"Best single kernel has bandwidth %.2f\" % best_kernel.get_width()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now perform two-sample test with that kernel" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "alpha=0.05\n", + "mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)\n", + "mmd.set_null_approximation_method(MMD1_GAUSSIAN);\n", + "p_value_best=mmd.perform_test();\n", + "\n", + "print \"Bootstrapping: P-value of MMD test with optimal kernel is %.2f\" % p_value_best" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (takes longer, reduce number of samples a bit). 
Note how we can tell the linear time MMD to smulate the null hypothesis, which is necessary since we cannot permute by hand as samples are not in memory)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)\n", + "num_samples=500\n", + "\n", + "# sample null and alternative distribution, implicitly generate new data for that\n", + "null_samples=zeros(num_samples)\n", + "alt_samples=zeros(num_samples)\n", + "for i in range(num_samples):\n", + " alt_samples[i]=mmd.compute_statistic()\n", + " \n", + " # tell MMD to merge data internally while streaming\n", + " mmd.set_simulate_h0(True)\n", + " null_samples[i]=mmd.compute_statistic()\n", + " mmd.set_simulate_h0(False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [ + "plot_alt_vs_null(alt_samples, null_samples, alpha)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Soon to come:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " * Two-sample tests on strings\n", + " * Two-sample tests on audio data (quite fun)\n", + " * Testing for independence with the Hilbert Schmidt Independence Criterion" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "[1]: Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A. (2012). A Kernel Two-Sample Test. Journal of Machine Learning Research, 13, 671–721.\n", + "\n", + "[2]: Gretton, A., Fukumizu, K., Harchaoui, Z., & Sriperumbudur, B. K. (2012). A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems (pp. 673–681).\n", + "\n", + "[3]: Gretton, A., Sriperumbudur, B., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., & Fukumizu, K. (2012). Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems.\n", + "\n", + "[4]: Sriperumbudur, B., Fukumizu, K., Gretton, A., Lanckriet, G. R. G., & Schölkopf, B. (2009). Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems\n", + "\n", + "[5]: Strathmann, H. (2012). M.Sc. Adaptive Large-Scale Kernel Two-Sample Testing. University College London." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": false + }, + "outputs": [], + "source": [] + } + ], "metadata": { - "name": "", - "signature": "sha256:b260843b0fa32020bbbc330568c3c141b306f6b9f371d43bc346e5a0d840fb43" - }, - "nbformat": 3, - "nbformat_minor": 0, - "worksheets": [ - { - "cells": [ - { - "cell_type": "heading", - "level": 1, - "metadata": {}, - "source": [ - "Kernel hypothesis testing in Shogun" - ] - }, - { - "cell_type": "heading", - "level": 4, - "metadata": {}, - "source": [ - "By Heiko Strathmann - heiko.strathmann@gmail.com - github.com/karlnapf - herrstrathmann.de" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook describes Shogun's framework for statistical hypothesis testing. 
We begin by giving a brief outline of the problem setting and then describe various implemented algorithms. All the algorithms discussed here are for Kernel two-sample testing with Maximum Mean Discrepancy and are based on embedding probability distributions into Reproducing Kernel Hilbert Spaces( RKHS )." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Methods for two-sample testing currently consist of tests based on the *Maximum Mean Discrepancy*. There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours.\n", - "Independence testing is currently based in the *Hilbert Schmidt Independence Criterion*." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "%pylab inline\n", - "%matplotlib inline\n", - "# import all Shogun classes\n", - "from modshogun import *" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 2, - "metadata": {}, - "source": [ - "Some Formal Basics (skip if you just want code examples)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called *null-hypothesis* $H_0$, which is the complement of an *alternative-hypothesis* $H_A$. \n", - "\n", - "To distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The *null-distribution* corresponds to test statistic samples under the model that $H_0$ holds; the *alternative-distribution* corresponds to test statistic samples under the model that $H_A$ holds.\n", - "\n", - "In practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.\n", - "\n", - "There are two different kinds of errors in hypothesis testing:\n", - "\n", - " * A *type I error* is made when $H_0: p=q$ is wrongly rejected. That is, the test says that the samples are from different distributions when they are not.\n", - " * A *type II error* is made when $H_A: p\\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.\n", - "\n", - "A so-called *consistent* test achieves zero type II error for a fixed type I error.\n", - "\n", - "To decide whether to reject $H_0$, one could set a threshold, say at the $95\\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies below that threshold. This means that the chance that the samples were generated under $H_0$ are $5\\%$. We call this number the *test power* $\\alpha$ (in this case $\\alpha=0.05$). It is an upper bound on the probability for a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called *p-value*, and to compare the p-value against a desired test power, say $\\alpha=0.05$, by hand. 
The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.\n", - "\n", - "In order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this for any two-sample test is called *bootstrapping*, or the *permutation* test, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for every of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. For many test statistics, there are more sophisticated methods of approximating the null distribution." - ] - }, - { - "cell_type": "heading", - "level": 2, - "metadata": {}, - "source": [ - "Base class for Hypothesis Testing" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Shogun implements statistical testing in the abstract class CHypothesisTest. All implemented methods will work with this interface at their most basic level. This class offers methods to\n", - "\n", - " * compute the implemented test statistic,\n", - " * compute p-values for a given value of the test statistic,\n", - " * compute a test threshold for a given p-value,\n", - " * sampling the null distribution, i.e. perform the permutation test or bootstrappig of the null-distribution, and\n", - " * performing a full two-sample test, and either returning a p-value or a binary rejection decision. This method is most useful in practice. Note that, depending on the used test statistic, it might be faster to call this than to compute threshold and test statistic seperately with the above methods.\n", - " \n", - "There are special subclasses for testing two distributions against each other (CTwoSampleTest, CIndependenceTest), kernel two-sample testing (CKernelTwoSampleTest), and kernel independence testing (CKernelIndependenceTest), which however mostly differ in internals and constructors." - ] - }, - { - "cell_type": "heading", - "level": 2, - "metadata": {}, - "source": [ - "Kernel Two-Sample Testing with the Maximum Mean Discrepancy" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "$\\DeclareMathOperator{\\mmd}{MMD}$\n", - "An important class of hypothesis tests are the *two-sample tests*. \n", - "In two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some *arbritary* domains $\\mathcal{X}, \\mathcal{Y}$ respectively, and i.i.d. samples $X=\\{x_i\\}_{i=1}^m\\subseteq \\mathcal{X}\\sim p$ and $Y=\\{y_i\\}_{i=1}^n\\subseteq \\mathcal{Y}\\sim p$, the two sample test distinguishes the hypothesises\n", - "\n", - "\\begin{align*}\n", - "H_0: p=q\\\\\n", - "H_A: p\\neq q\n", - "\\end{align*}\n", - "\n", - "In order to solve this problem, it is desirable to have a criterion than takes a positive unique value if $p\\neq q$, and zero if and only if $p=q$. The so called *Maximum Mean Discrepancy (MMD)*, has this property and allows to distinguish any two probability distributions, if used in a *reproducing kernel Hilbert space (RKHS)*. 
It is the distance of the mean embeddings $\\mu_p, \\mu_q$ of the distributions $p,q$ in such a RKHS $\\mathcal{F}$ -- which can also be expressed in terms of expectation of kernel functions, i.e.\n", - "\n", - "\\begin{align*}\n", - "\\mmd[\\mathcal{F},p,q]&=||\\mu_p-\\mu_q||_\\mathcal{F}^2\\\\\n", - "&=\\textbf{E}_{x,x'}\\left[ k(x,x')\\right]-\n", - " 2\\textbf{E}_{x,y}\\left[ k(x,y)\\right]\n", - " +\\textbf{E}_{y,y'}\\left[ k(y,y')\\right]\n", - "\\end{align*}\n", - "\n", - "Note that this formulation does not assume any form of the input data, we just need a kernel function whose feature space is a RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (CDenseFeatures, CSparseFeatures, CStringFeatures, etc), as long as we or you provide a positive definite kernel function under the interface of CKernel.\n", - "\n", - "We here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods how to approximate the null-distribution in order to construct a two-sample test." - ] - }, - { - "cell_type": "heading", - "level": 3, - "metadata": {}, - "source": [ - "Running Example Data. Gaussian vs. Laplace" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian\n", - "\n", - "$p(x)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(x-\\mu)^2}{\\sigma^2}\\right)$\n", - "\n", - "with mean $\\mu$ and variance $\\sigma^2$, which is compared against the 1D Laplace distribution\n", - "\n", - "$p(x)=\\frac{1}{2b}\\exp\\left(-\\frac{|x-\\mu|}{b}\\right)$\n", - "\n", - "with the same mean $\\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\\sqrt{\\frac{1}{2}}$, which means that $2b^2=\\sigma^2=1$." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# use scipy for generating samples\n", - "from scipy.stats import norm, laplace\n", - "\n", - "def sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)): \n", - " # sample from both distributions\n", - " X=norm.rvs(size=n, loc=mu, scale=sigma2)\n", - " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", - " \n", - " return X,Y" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "mu=0.0\n", - "sigma2=1\n", - "b=sqrt(0.5)\n", - "n=220\n", - "X,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)\n", - "\n", - "# plot both densities and histograms\n", - "figure(figsize=(18,5))\n", - "suptitle(\"Gaussian vs. Laplace\")\n", - "subplot(121)\n", - "Xs=linspace(-2, 2, 500)\n", - "plot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))\n", - "plot(Xs, laplace.pdf(Xs, loc=mu, scale=b))\n", - "title(\"Densities\")\n", - "xlabel(\"$x$\")\n", - "ylabel(\"$p(x)$\")\n", - "_=legend([ 'Gaussian','Laplace'])\n", - "\n", - "subplot(122)\n", - "hist(X, alpha=0.5)\n", - "xlim([-5,5])\n", - "ylim([0,100])\n", - "hist(Y,alpha=0.5)\n", - "xlim([-5,5])\n", - "ylim([0,100])\n", - "legend([\"Gaussian\", \"Laplace\"])\n", - "_=title('Histograms')" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now how to compare these two sets of samples? 
Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at *all* (infinitely many) higher order moments." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "print \"Gaussian vs. Laplace\"\n", - "print \"Sample means: %.2f vs %.2f\" % (mean(X), mean(Y))\n", - "print \"Samples variances: %.2f vs %.2f\" % (var(X), var(Y))" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 3, - "metadata": {}, - "source": [ - "Quadratic Time MMD" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in CQuadraticTimeMMD, which accepts any type of features in Shogun, and use it on the above toy problem.\n", - "\n", - "An unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples\n", - "\n", - "$$\n", - "\\mmd_u[\\mathcal{F},X,Y]^2=\\frac{1}{m(m-1)}\\sum_{i=1}^m\\sum_{j\\neq i}^mk(x_i,x_j) + \\frac{1}{n(n-1)}\\sum_{i=1}^n\\sum_{j\\neq i}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n", - "$$\n", - "\n", - "A biased estimate would be\n", - "\n", - "$$\n", - "\\mmd_b[\\mathcal{F},X,Y]^2=\\frac{1}{m^2}\\sum_{i=1}^m\\sum_{j=1}^mk(x_i,x_j) + \\frac{1}{n^ 2}\\sum_{i=1}^n\\sum_{j=1}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n", - ".$$\n", - "\n", - "Computing the test statistic using CQuadraticTimeMMD does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of both types. Both statistics' computational costs are quadratic both in time and space. Note that the method returns $m\\mmd_b[\\mathcal{F},X,Y]^2$ since null distribution approximations work on $m$ times null distribution. Here is how the test statistic itself is computed." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# turn data into Shogun representation (columns vectors)\n", - "feat_p=RealFeatures(X.reshape(1,len(X)))\n", - "feat_q=RealFeatures(Y.reshape(1,len(Y)))\n", - "\n", - "# choose kernel for testing. Here: Gaussian\n", - "kernel_width=1\n", - "kernel=GaussianKernel(10, kernel_width)\n", - "\n", - "# create mmd instance of test-statistic\n", - "mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", - "\n", - "# compute biased and unbiased test statistic (default is unbiased)\n", - "mmd.set_statistic_type(BIASED)\n", - "biased_statistic=mmd.compute_statistic()\n", - "\n", - "mmd.set_statistic_type(UNBIASED)\n", - "unbiased_statistic=mmd.compute_statistic()\n", - "\n", - "print \"%d x MMD_b[X,Y]^2=%.2f\" % (len(X), biased_statistic)\n", - "print \"%d x MMD_u[X,Y]^2=%.2f\" % (len(X), unbiased_statistic)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Any sub-class of CHypothesisTest can compute approximate the null distribution using permutation/bootstrapping. 
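To make these two expressions concrete, here is a minimal NumPy sketch of both estimators for the 1D running example. The names `gaussian_kernel` and `quadratic_time_mmd2` are illustrative helpers, not Shogun API, and the kernel-width convention $k(x,y)=\exp(-(x-y)^2/\text{width})$ is an assumption chosen to roughly mirror the `kernel_width=1` setting above.

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    # k(a, b) = exp(-(a-b)^2 / width) for 1D inputs (illustrative convention)
    return np.exp(-np.subtract.outer(A, B) ** 2 / width)

def quadratic_time_mmd2(X, Y, width=1.0):
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, width)
    Kyy = gaussian_kernel(Y, Y, width)
    Kxy = gaussian_kernel(X, Y, width)

    # biased estimate: keep all terms, including the within-sample diagonals
    mmd2_b = Kxx.sum() / m ** 2 + Kyy.sum() / n ** 2 - 2 * Kxy.sum() / (m * n)

    # unbiased estimate: drop the within-sample diagonal terms
    mmd2_u = ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
              + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
              - 2 * Kxy.sum() / (m * n))
    return mmd2_b, mmd2_u
```

The permutation approach mentioned above then only needs a few more lines: merge both samples, reshuffle them repeatedly, recompute the (scaled) statistic each time, and read p-value and threshold off the resulting null samples. Again a sketch that reuses the hypothetical helper, not Shogun's optimised implementation.

```python
def permutation_null_samples(X, Y, num_null_samples=100, width=1.0, seed=1):
    # sample the null distribution of m * MMD_b^2 by permuting the merged data
    rng = np.random.RandomState(seed)
    Z = np.concatenate([X, Y])
    m = len(X)
    samples = np.zeros(num_null_samples)
    for i in range(num_null_samples):
        perm = rng.permutation(len(Z))
        samples[i] = m * quadratic_time_mmd2(Z[perm[:m]], Z[perm[m:]], width)[0]
    return samples

statistic_sketch = len(X) * quadratic_time_mmd2(X, Y)[0]
null_sketch = permutation_null_samples(X, Y)
p_value_sketch = np.mean(null_sketch >= statistic_sketch)   # fraction of null samples >= statistic
threshold_sketch = np.percentile(null_sketch, 95)           # (1-alpha)-quantile for alpha=0.05
print "Sketch: statistic=%.2f, p-value=%.2f, threshold=%.2f" % (statistic_sketch, p_value_sketch, threshold_sketch)
```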
This way always is guaranteed to produce consistent results, however, it might take a long time as for each sample of the null distribution, the test statistic has to be computed for a different permutation of the data. Note that each of the below calls samples from the null distribution. It is wise to choose one method in practice. Also note that we set the number of samples from the null distribution to a low value to reduce runtime. Choose larger in practice, it is in fact good to plot the samples." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# this is not necessary as bootstrapping is the default\n", - "mmd.set_null_approximation_method(PERMUTATION)\n", - "mmd.set_statistic_type(UNBIASED)\n", - "\n", - "# to reduce runtime, should be larger practice\n", - "mmd.set_num_null_samples(100)\n", - "\n", - "# now show a couple of ways to compute the test\n", - "\n", - "# compute p-value for computed test statistic\n", - "p_value=mmd.compute_p_value(unbiased_statistic)\n", - "print \"P-value of MMD value %.2f is %.2f\" % (unbiased_statistic, p_value)\n", - "\n", - "# compute threshold for rejecting H_0 for a given test power\n", - "alpha=0.05\n", - "threshold=mmd.compute_threshold(alpha)\n", - "print \"Threshold for rejecting H0 with a test power of %.2f is %.2f\" % (alpha, threshold)\n", - "\n", - "# performing the test by hand given the above results, note that those two are equivalent\n", - "if unbiased_statistic>threshold:\n", - " print \"H0 is rejected with confidence %.2f\" % alpha\n", - " \n", - "if p_valueCCustomKernel class, which allows to precompute a kernel matrix (multithreaded) of a given kernel and store it in memory. Instances of this class can then be used as if they were standard kernels." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# precompute kernel to be faster for null sampling\n", - "p_and_q=mmd.get_p_and_q()\n", - "kernel.init(p_and_q, p_and_q);\n", - "precomputed_kernel=CustomKernel(kernel);\n", - "mmd.set_kernel(precomputed_kernel);\n", - "\n", - "# increase number of iterations since should be faster now\n", - "mmd.set_num_null_samples(500);\n", - "p_value_boot=mmd.perform_test();\n", - "print \"P-value of MMD test is %.2f\" % p_value_boot" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now let us visualise distribution of MMD statistic under $H_0:p=q$ and $H_A:p\\neq q$. Sample both null and alternative distribution for that. Use the interface of CTwoSampleTest to sample from the null distribution (permutations, re-computing of test statistic is done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed, and infinite data is needed. Though it is not needed in practice but only for illustrational purposes here." 
- ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "num_samples=500\n", - "\n", - "# sample null distribution\n", - "null_samples=mmd.sample_null(num_samples)\n", - "\n", - "# sample alternative distribution, generate new data for that\n", - "alt_samples=zeros(num_samples)\n", - "for i in range(num_samples):\n", - " X=norm.rvs(size=n, loc=mu, scale=sigma2)\n", - " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", - " feat_p=RealFeatures(reshape(X, (1,len(X))))\n", - " feat_q=RealFeatures(reshape(Y, (1,len(Y))))\n", - " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", - " alt_samples[i]=mmd.compute_statistic()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 4, - "metadata": {}, - "source": [ - "Null and Alternative Distribution Illustrated" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Visualise both distributions, $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\\alpha)$-quantil of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II error:\n", - "\n", - " * type I error is the area of the null distribution being right of the threshold\n", - " * type II error is the area of the alternative distribution being left from the threshold" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "def plot_alt_vs_null(alt_samples, null_samples, alpha):\n", - " figure(figsize=(18,5))\n", - " \n", - " subplot(131)\n", - " hist(null_samples, 50, color='blue')\n", - " title('Null distribution')\n", - " subplot(132)\n", - " title('Alternative distribution')\n", - " hist(alt_samples, 50, color='green')\n", - " \n", - " subplot(133)\n", - " hist(null_samples, 50, color='blue')\n", - " hist(alt_samples, 50, color='green', alpha=0.5)\n", - " title('Null and alternative distriution')\n", - " \n", - " # find (1-alpha) element of null distribution\n", - " null_samples_sorted=sort(null_samples)\n", - " quantile_idx=int(num_samples*(1-alpha))\n", - " quantile=null_samples_sorted[quantile_idx]\n", - " axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')\n", - " _=legend()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "plot_alt_vs_null(alt_samples, null_samples, alpha)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 4, - "metadata": {}, - "source": [ - "Different Ways to Approximate the Null Distribution for the Quadratic Time MMD" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As already mentioned, bootstrapping the null distribution is expensive business. There exist a couple of methods that are more sophisticated and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.\n", - "\n", - "The first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. 
The statistic converges in distribution as\n",
- "\n",
- "$$\n",
- "m\\mmd^2_b \\rightarrow \\sum_{l=1}^\\infty \\lambda_l z_l^2\n",
- "$$\n",
- "\n",
- "where $z_l\\sim \\mathcal{N}(0,2)$ are i.i.d. normal samples and the $\\lambda_l$ are the Eigenvalues of expression 2 in [2], which can be empirically estimated by $\\hat\\lambda_l=\\frac{1}{m}\\nu_l$, where $\\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:\n",
- "\n",
- " * Number of samples from the null-distribution. The more samples, the more accurate the approximation. As a rule of thumb, use 250.\n",
- " * Number of Eigenvalues of the Eigen-decomposition of the kernel matrix to use. The more, the better the results get. However, the Eigen-spectrum of the joint gram matrix usually decreases very fast. Plotting the spectrum can help. See [2] for details.\n",
- "\n",
- "If the kernel matrices are diagonally dominant, this method is likely to fail. For that and more details, see the original paper. Computational costs are much lower than bootstrapping, which is the only consistent alternative. Since the Eigenvalues of the gram matrix have to be computed, costs are in $\\mathcal{O}(m^3)$.\n",
- "\n",
- "Below, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class CQuadraticTimeMMD. This method only works with the biased statistic."
- ]
- },
- {
- "cell_type": "code",
- "collapsed": false,
- "input": [
- "# optional: plot spectrum of joint kernel matrix\n",
- "from numpy.linalg import eig\n",
- "\n",
- "# get joint feature object and compute kernel matrix and its spectrum\n",
- "feats_p_q=mmd.get_p_and_q()\n",
- "mmd.get_kernel().init(feats_p_q, feats_p_q)\n",
- "K=mmd.get_kernel().get_kernel_matrix()\n",
- "w,_=eig(K)\n",
- "\n",
- "# visualise K and its spectrum (only up to threshold)\n",
- "figure(figsize=(18,5))\n",
- "subplot(121)\n",
- "imshow(K, interpolation=\"nearest\")\n",
- "title(\"Kernel matrix K of joint data $X$ and $Y$\")\n",
- "subplot(122)\n",
- "thresh=0.1\n",
- "plot(w[:len(w[w>thresh])])\n",
- "_=title(\"Eigenspectrum of K until component %d\" % len(w[w>thresh]))"
- ],
- "language": "python",
- "metadata": {},
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The above plot of the Eigenspectrum shows that the Eigenvalues decay extremely fast. We choose the number of Eigenvalues for the approximation such that all Eigenvalues larger than some threshold are used. In this case, we will not lose much accuracy while gaining a significant speedup. For more slowly decaying Eigenspectra, this approximation might be more expensive."
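A bare-bones NumPy version of this sampling scheme might look as follows; `spectrum_null_samples` is an illustrative helper, and the scaling $\hat\lambda_l=\nu_l/m$ follows the empirical estimate quoted above.

```python
import numpy as np

def spectrum_null_samples(K, m, num_eigenvalues, num_null_samples=250, seed=1):
    # K: kernel matrix of the merged samples X and Y; returns samples of m * MMD_b^2
    rng = np.random.RandomState(seed)
    N = K.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N                    # centring matrix
    nu = np.linalg.eigvalsh(H.dot(K).dot(H))               # eigenvalues of the centred kernel matrix
    lambdas = np.sort(nu)[::-1][:num_eigenvalues] / m      # empirical estimates of the lambda_l
    z = np.sqrt(2.0) * rng.randn(num_null_samples, num_eigenvalues)   # z_l ~ N(0, 2)
    return (z ** 2).dot(lambdas)

# reuse the kernel matrix K of the joint data computed a couple of cells above
null_spectrum_sketch = spectrum_null_samples(K, len(X), num_eigenvalues=10)
```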
- ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# threshold for eigenspectrum\n", - "thresh=0.1\n", - "\n", - "# compute number of eigenvalues to use\n", - "num_eigen=len(w[w>thresh])\n", - "\n", - "# finally, do the test, use biased statistic\n", - "mmd.set_statistic_type(BIASED)\n", - "\n", - "#tell Shogun to use spectrum approximation\n", - "mmd.set_null_approximation_method(MMD2_SPECTRUM)\n", - "mmd.set_num_eigenvalues_spectrum(num_eigen)\n", - "mmd.set_num_samples_spectrum(num_samples)\n", - "\n", - "# the usual test interface\n", - "p_value_spectrum=mmd.perform_test()\n", - "print \"Spectrum: P-value of MMD test is %.2f\" % p_value_spectrum\n", - "\n", - "# compare with ground truth bootstrapping\n", - "mmd.set_null_approximation_method(PERMUTATION)\n", - "mmd.set_num_null_samples(num_samples)\n", - "p_value_boot=mmd.perform_test()\n", - "print \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_spectrum" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 3, - "metadata": {}, - "source": [ - "The Gamma Moment Matching Approximation and Type I errors" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "$\\DeclareMathOperator{\\var}{var}$\n", - "Another method for approximating the null-distribution is by matching the first two moments of a Gamma distribution and then compute the quantiles of that. This does not result in a consistent test, but usually also gives good results while being very fast. However, there are distributions where the method fail. Therefore, the type I error should always be monitored. Described in [2]. It uses\n", - "\n", - "$$\n", - "m\\mmd_b(Z) \\sim \\frac{x^{\\alpha-1}\\exp(-\\frac{x}{\\beta})}{\\beta^\\alpha \\Gamma(\\alpha)}\n", - "$$\n", - "\n", - "where\n", - "\n", - "$$\n", - "\\alpha=\\frac{(\\textbf{E}(\\text{MMD}_b(Z)))^2}{\\var(\\text{MMD}_b(Z))} \\qquad \\text{and} \\qquad\n", - " \\beta=\\frac{m \\var(\\text{MMD}_b(Z))}{(\\textbf{E}(\\text{MMD}_b(Z)))^2}\n", - "$$\n", - "\n", - "Then, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\\mathcal{O}(m^2)$. Note that the test is parameter free. It only works with the biased statistic." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# tell Shogun to use gamma approximation\n", - "mmd.set_null_approximation_method(MMD2_GAMMA)\n", - "\n", - "# the usual test interface\n", - "p_value_gamma=mmd.perform_test()\n", - "print \"Gamma: P-value of MMD test is %.2f\" % p_value_gamma\n", - "\n", - "# compare with ground truth bootstrapping\n", - "mmd.set_null_approximation_method(PERMUTATION)\n", - "p_value_boot=mmd.perform_test()\n", - "print \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_spectrum" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similar to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average results. Below we compare type I errors or all methods for approximating the null distribution. This will take a while." 
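Before that comparison, the moment-matching step itself is easy to spell out. The sketch below estimates the two null moments from permutation samples of the earlier NumPy sketch purely for illustration (Shogun's gamma approximation obtains them in closed form, see [2]) and then reads p-value and threshold off the fitted gamma distribution via SciPy.

```python
import numpy as np
from scipy.stats import gamma

# estimate the first two null moments of m * MMD_b^2 (illustration only)
null_gamma_sketch = permutation_null_samples(X, Y, num_null_samples=200)
null_mean = np.mean(null_gamma_sketch)
null_var = np.var(null_gamma_sketch)

# match the first two moments with a gamma distribution
shape = null_mean ** 2 / null_var
scale = null_var / null_mean

statistic_b_sketch = len(X) * quadratic_time_mmd2(X, Y)[0]
p_value_gamma_sketch = 1 - gamma.cdf(statistic_b_sketch, a=shape, scale=scale)
threshold_gamma_sketch = gamma.ppf(0.95, a=shape, scale=scale)
print "Gamma sketch: p-value=%.2f, threshold=%.2f" % (p_value_gamma_sketch, threshold_gamma_sketch)
```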
- ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# type I error is false alarm, therefore sample data under H0\n", - "num_trials=50\n", - "rejections_gamma=zeros(num_trials)\n", - "rejections_spectrum=zeros(num_trials)\n", - "rejections_bootstrap=zeros(num_trials)\n", - "num_samples=50\n", - "alpha=0.05\n", - "for i in range(num_trials):\n", - " X=norm.rvs(size=n, loc=mu, scale=sigma2)\n", - " Y=laplace.rvs(size=n, loc=mu, scale=b)\n", - " \n", - " # simulate H0 via merging samples before computing the \n", - " Z=hstack((X,Y))\n", - " X=Z[:len(X)]\n", - " Y=Z[len(X):]\n", - " feat_p=RealFeatures(reshape(X, (1,len(X))))\n", - " feat_q=RealFeatures(reshape(Y, (1,len(Y))))\n", - " \n", - " # gamma\n", - " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", - " mmd.set_null_approximation_method(MMD2_GAMMA)\n", - " mmd.set_statistic_type(BIASED)\n", - " rejections_gamma[i]=mmd.perform_test(alpha)\n", - " \n", - " # spectrum\n", - " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", - " mmd.set_null_approximation_method(MMD2_SPECTRUM)\n", - " mmd.set_num_eigenvalues_spectrum(num_eigen)\n", - " mmd.set_num_samples_spectrum(num_samples)\n", - " mmd.set_statistic_type(BIASED)\n", - " rejections_spectrum[i]=mmd.perform_test(alpha)\n", - " \n", - " # bootstrap (precompute kernel)\n", - " mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n", - " p_and_q=mmd.get_p_and_q()\n", - " kernel.init(p_and_q, p_and_q)\n", - " precomputed_kernel=CustomKernel(kernel)\n", - " mmd.set_kernel(precomputed_kernel)\n", - " mmd.set_null_approximation_method(PERMUTATION)\n", - " mmd.set_num_null_samples(num_samples)\n", - " mmd.set_statistic_type(BIASED)\n", - " rejections_bootstrap[i]=mmd.perform_test(alpha)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "convergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)\n", - "convergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)\n", - "convergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)\n", - "\n", - "print \"Average rejection rate of H0 for Gamma is %.2f\" % mean(convergence_gamma)\n", - "print \"Average rejection rate of H0 for Spectrum is %.2f\" % mean(convergence_spectrum)\n", - "print \"Average rejection rate of H0 for Bootstrapping is %.2f\" % mean(rejections_bootstrap)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We see that Gamma basically never rejects, which is inline with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors." - ] - }, - { - "cell_type": "heading", - "level": 3, - "metadata": {}, - "source": [ - "Linear Time MMD on Gaussian Blobs" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "So far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. The linear time MMD statistic, implemented in CLinearTimeMMD can help here, as it accepts data under the streaming interface CStreamingFeatures, which deliver data one-by-one.\n", - "\n", - "And it can do more cool things, for example choose the best single (or combined) kernel for you. But we need a more fancy dataset for that to show its power. 
We will use one of Shogun's streaming based data generator, CGaussianBlobsDataGenerator for that. This dataset consists of two distributions which are a grid of Gaussians where in one of them, the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# paramters of dataset\n", - "m=20000\n", - "distance=10\n", - "stretch=5\n", - "num_blobs=3\n", - "angle=pi/4\n", - "\n", - "# these are streaming features\n", - "gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\n", - "gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n", - "\t\t\n", - "# stream some data and plot\n", - "num_plot=1000\n", - "features=gen_p.get_streamed_features(num_plot)\n", - "features=features.create_merged_copy(gen_q.get_streamed_features(num_plot))\n", - "data=features.get_feature_matrix()\n", - "\n", - "figure(figsize=(18,5))\n", - "subplot(121)\n", - "grid(True)\n", - "plot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')\n", - "title('$X\\sim p$')\n", - "subplot(122)\n", - "grid(True)\n", - "plot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)\n", - "_=title('$Y\\sim q$')" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute\n", - "\n", - "$$\n", - "\\mmd_l^2[\\mathcal{F},X,Y]=\\frac{1}{m_2}\\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-\n", - " k(x_{2i+1},y_{2i})\n", - "$$\n", - "\n", - "where $ m_2=\\lfloor\\frac{m}{2} \\rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only pairs of four points are considered at once, this allows to compute it on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.\n", - "\n", - "We begin by illustrating how to pass data to CLinearTimeMMD. In order not to loose performance due to overhead, it is possible to specify a block size for the data stream." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "block_size=100\n", - "\n", - "# if features are already under the streaming interface, just pass them\n", - "mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\n", - "\n", - "# compute an unbiased estimate in linear time\n", - "statistic=mmd.compute_statistic()\n", - "print \"MMD_l[X,Y]^2=%.2f\" % statistic\n", - "\n", - "# note: due to the streaming nature, successive calls of compute statistic use different data\n", - "# and produce different results. Data cannot be stored in memory\n", - "for _ in range(5):\n", - " print \"MMD_l[X,Y]^2=%.2f\" % mmd.compute_statistic()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Sometimes, one might want to use CLinearTimeMMD with data that is stored in memory. 
In that case, it is easy to data in the form of for example CStreamingDenseFeatures into CDenseFeatures." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "# data source\n", - "gen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\n", - "gen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n", - "\n", - "# retreive some points, store them as non-streaming data in memory\n", - "data_p=gen_p.get_streamed_features(100)\n", - "data_q=gen_q.get_streamed_features(data_p.get_num_vectors())\n", - "print \"Number of data is %d\" % data_p.get_num_vectors()\n", - "\n", - "# cast data in memory as streaming features again (which now stream from the in-memory data)\n", - "streaming_p=StreamingRealFeatures(data_p)\n", - "streaming_q=StreamingRealFeatures(data_q)\n", - "\n", - "# it is important to start the internal parser to avoid deadlocks\n", - "streaming_p.start_parser()\n", - "streaming_q.start_parser()\n", - "\n", - "# example to create mmd (note that m can be maximum the number of data in memory)\n", - "\n", - "mmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)\n", - "print \"Linear time MMD statistic: %.2f\" % mmd.compute_statistic()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 4, - "metadata": {}, - "source": [ - "The Gaussian Approximation to the Null Distribution" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since CLinearTimeMMD operates on streaming features, *new* data is taken from the stream in every iteration.\n", - "\n", - "Bootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.\n", - "\n", - "Since both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for\n", - "\n", - "$$\n", - "\\var[\\mmd_l^2[\\mathcal{F},X,Y]]\n", - "$$\n", - "\n", - "can simply be computed by computing the empirical variance of\n", - "\n", - "$$\n", - "k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \\qquad (1\\leq i\\leq m_2)\n", - "$$\n", - "\n", - "A normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.\n", - "\n", - "CLinearTimeMMD allows to approximate the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice since seperate calls of computing statistic and p-value will operator on different data from the stream. 
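For the 1D Gaussian vs. Laplace example, both the linear-time statistic and its Gaussian null approximation can be sketched directly in NumPy; the streaming and blocking machinery of CLinearTimeMMD is omitted here, and `gaussian_kernel_1d`/`linear_time_mmd_test` are illustrative helper names.

```python
import numpy as np
from scipy.stats import norm as gaussian_dist

def gaussian_kernel_1d(a, b, width=1.0):
    # same illustrative kernel convention as in the earlier sketches
    return np.exp(-(a - b) ** 2 / width)

def linear_time_mmd_test(X, Y, width=1.0):
    # form the terms h_i from consecutive, non-overlapping pairs of points
    m2 = min(len(X), len(Y)) // 2
    x1, x2 = X[0:2 * m2:2], X[1:2 * m2:2]
    y1, y2 = Y[0:2 * m2:2], Y[1:2 * m2:2]
    h = (gaussian_kernel_1d(x1, x2, width) + gaussian_kernel_1d(y1, y2, width)
         - gaussian_kernel_1d(x1, y2, width) - gaussian_kernel_1d(x2, y1, width))

    mmd_l = h.mean()                      # linear-time MMD^2 estimate
    var_of_mean = h.var(ddof=1) / m2      # variance of the estimate from the empirical variance of the h_i
    # under H0 the estimate is approximately zero-mean Gaussian with this variance
    p_value = 1 - gaussian_dist.cdf(mmd_l / np.sqrt(var_of_mean))
    return mmd_l, p_value

# fresh 1D samples, since X and Y were overwritten in the type I error experiment above
X_lin = norm.rvs(size=5000, loc=mu, scale=sigma2)
Y_lin = laplace.rvs(size=5000, loc=mu, scale=b)
mmd_l_sketch, p_value_lin_sketch = linear_time_mmd_test(X_lin, Y_lin)
print "Linear-time sketch: MMD_l^2=%.4f, p-value=%.2f" % (mmd_l_sketch, p_value_lin_sketch)
```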
Below, we compute the test on a large amount of data (impossible to perform quadratic time MMD for this one as the kernel matrices cannot be stored in memory)" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\n", - "print \"m=%d samples from p and q\" % m\n", - "print \"Binary test result is: \" + (\"Rejection\" if mmd.perform_test(alpha) else \"No rejection\")\n", - "print \"P-value test result is %.2f\" % mmd.perform_test()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 3, - "metadata": {}, - "source": [ - "Kernel Selection for the MMD -- Overview" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "$\\DeclareMathOperator*{\\argmin}{arg\\,min}\n", - "\\DeclareMathOperator*{\\argmax}{arg\\,max}$\n", - "Now which kernel do we actually use for our tests? So far, we just plugged in arbritary ones. However, for kernel two-sample testing, it is possible to do something more clever.\n", - "\n", - "Shogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the CLinearTimeMMD, [3] describes a way of selecting the *optimal* kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation and it is possible for single kernels and also for convex combinations of them. For the CQuadraticTimeMMD, the best method in literature is choosing the kernel that maximised the MMD statistic [4]. For convex combinations of kernels, this can be achieved via a $L2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].\n", - "\n", - "MMD Kernel selection in Shogun always involves an implementation of the base class CMMDKernelSelection, which defines the interface for kernel selection. If combinations of kernel should be considered, there is a sub-class CMMDKernelSelectionComb. In addition, it involves setting up a number of baseline kernels $\\mathcal{K}$ to choose from/combine in the form of a CCombinedKernel. All methods compute their results for a fixed set of these baseline kernels. We later give an example how to use these classes after providing a list of available methods.\n", - "\n", - " * CMMDKernelSelectionMedian Selects from a set CGaussianKernel instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution that can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousands. This method a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. Note that it may fail badly in selecting a good kernel for certain problems.\n", - "\n", - " * CMMDKernelSelectionMax Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specific its estimate.\n", - "$$\n", - "k^*=\\argmax_{k\\in\\mathcal{K}} \\hat \\eta_k,\n", - "$$\n", - "where $\\eta_k$ is an empirical MMD estimate for using a kernel $k$.\n", - "This was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. 
Since MMD estimates can be computed in linear and quadratic time, this method works for both methods. However, for the linear time statistic, there exists a better method.\n", - " \n", - " * CMMDKernelSelectionOpt Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.\n", - "$$\n", - "k^*=\\argmax_{k\\in\\mathcal{K}} \\frac{\\hat \\eta_k}{\\hat\\sigma_k+\\lambda},\n", - "$$\n", - "where $\\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\\hat\\sigma_k$ is a linear time variance estimate of $\\eta_k$ to which a small number $\\lambda$ is added to prevent division by zero.\n", - "These are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for CLinearTimeMMD. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. *Important:* For this method to work, the kernel needs to be selected on *different* data than the test is performed on. Otherwise, the method will produce wrong results.\n", - " \n", - " * CMMDKernelSelectionCombMaxL2 Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogous to CMMDKernelSelectionMax. This is done via solving the convex program\n", - "$$\n", - "\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} \\{\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0\\},\n", - "$$\n", - "where $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.\n", - "Again, when using the linear time MMD, there are better methods available.\n", - "\n", - " * CMMDKernelSelectionCombOpt Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to \\emph{optimal} kernel selection in the same sense as in class CMMDKernelSelectionOpt and is its multiple kernel analogous. The convex program to solve is\n", - "$$\n", - "\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} (\\hat Q+\\lambda I) \\{\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0\\},\n", - "$$\n", - "where again $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\\hat Q$ is a linear time estimate of the covariance matrix of the vector $\\boldsymbol{\\eta}$ to whose diagonal a small number $\\lambda$ is added to prevent division by zero. See [3] for details. In contrast to CMMDKernelSelectionCombMaxL2, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains CMMDKernelSelectionOpt." - ] - }, - { - "cell_type": "heading", - "level": 4, - "metadata": {}, - "source": [ - "MMD Kernel Selection in Shogun" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In order to use one of the above methods for kernel selection, one has to create a new instance of CCombinedKernel append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. 
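As a rough NumPy illustration of what the single-kernel criteria above compute, the following sketch evaluates the ratio $\hat\eta_k/(\hat\sigma_k+\lambda)$ over a grid of Gaussian kernel widths, using the linear-time quantities from the previous sketch, and picks the maximiser. `select_gaussian_width` is a hypothetical helper and not part of Shogun; dropping the $\hat\sigma_k$ term would give the max-MMD heuristic instead.

```python
import numpy as np

def select_gaussian_width(X_sel, Y_sel, widths, lmbda=1e-5):
    # criterion: eta_k / (sigma_k + lambda), both estimated in linear time from pairs
    m2 = min(len(X_sel), len(Y_sel)) // 2
    x1, x2 = X_sel[0:2 * m2:2], X_sel[1:2 * m2:2]
    y1, y2 = Y_sel[0:2 * m2:2], Y_sel[1:2 * m2:2]
    ratios = []
    for w in widths:
        h = (gaussian_kernel_1d(x1, x2, w) + gaussian_kernel_1d(y1, y2, w)
             - gaussian_kernel_1d(x1, y2, w) - gaussian_kernel_1d(x2, y1, w))
        eta = h.mean()
        sigma = np.sqrt(h.var(ddof=1) / m2)
        ratios.append(eta / (sigma + lmbda))
    return widths[int(np.argmax(ratios))]

# select on one batch of data; the actual test must then run on fresh data
widths = [2.0 ** e for e in np.linspace(-5, 5, 10)]
X_sel = norm.rvs(size=5000, loc=mu, scale=sigma2)
Y_sel = laplace.rvs(size=5000, loc=mu, scale=b)
print "Selected Gaussian width (sketch): %.2f" % select_gaussian_width(X_sel, Y_sel, widths)
```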
Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call\n", - "\n", - " * *compute_measures* to compute a vector kernel selection criteria if a single kernel selection method is used. It will return a vector of selected kernel weights if a combined kernel selection method is used. For \\shogunclass{CMMDKernelSelectionMedian}, the method does throw an error.\n", - "\n", - " * *select\\_kernel* returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying CCombinedKernel instance where the subkernel weights are set to the weights that were selected by the method. \n", - "\n", - "In order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example how to select the optimal single and combined kernel for the Gaussian Blobs dataset." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "What is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. Create some kernels to select the best from" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "sigmas=[2**x for x in linspace(-5,5, 10)]\n", - "print \"Choosing kernel width from\", [\"{0:.2f}\".format(sigma) for sigma in sigmas]\n", - "combined=CombinedKernel()\n", - "for i in range(len(sigmas)):\n", - " combined.append_kernel(GaussianKernel(10, sigmas[i]))\n", - "\n", - "# mmd instance using streaming features\n", - "block_size=1000\n", - "mmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)\n", - "\n", - "# optmal kernel choice is possible for linear time MMD\n", - "selection=MMDKernelSelectionOpt(mmd)\n", - "\n", - "# select best kernel\n", - "best_kernel=selection.select_kernel()\n", - "best_kernel=GaussianKernel.obtain_from_generic(best_kernel)\n", - "print \"Best single kernel has bandwidth %.2f\" % best_kernel.get_width()" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now perform two-sample test with that kernel" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "alpha=0.05\n", - "mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)\n", - "mmd.set_null_approximation_method(MMD1_GAUSSIAN);\n", - "p_value_best=mmd.perform_test();\n", - "\n", - "print \"Bootstrapping: P-value of MMD test with optimal kernel is %.2f\" % p_value_best" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (takes longer, reduce number of samples a bit). 
Note how we can tell the linear time MMD to smulate the null hypothesis, which is necessary since we cannot permute by hand as samples are not in memory)" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)\n", - "num_samples=500\n", - "\n", - "# sample null and alternative distribution, implicitly generate new data for that\n", - "null_samples=zeros(num_samples)\n", - "alt_samples=zeros(num_samples)\n", - "for i in range(num_samples):\n", - " alt_samples[i]=mmd.compute_statistic()\n", - " \n", - " # tell MMD to merge data internally while streaming\n", - " mmd.set_simulate_h0(True)\n", - " null_samples[i]=mmd.compute_statistic()\n", - " mmd.set_simulate_h0(False)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection" - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [ - "plot_alt_vs_null(alt_samples, null_samples, alpha)" - ], - "language": "python", - "metadata": {}, - "outputs": [] - }, - { - "cell_type": "heading", - "level": 2, - "metadata": {}, - "source": [ - "Soon to come:" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - " * Two-sample tests on strings\n", - " * Two-sample tests on audio data (quite fun)\n", - " * Testing for independence with the Hilbert Schmidt Independence Criterion" - ] - }, - { - "cell_type": "heading", - "level": 2, - "metadata": {}, - "source": [ - "References" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "[1]: Gretton, A., Borgwardt, K. M., Rasch, M. J., Sch\u00f6lkopf, B., & Smola, A. (2012). A Kernel Two-Sample Test. Journal of Machine Learning Research, 13, 671\u2013721.\n", - "\n", - "[2]: Gretton, A., Fukumizu, K., Harchaoui, Z., & Sriperumbudur, B. K. (2012). A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems (pp. 673\u2013681).\n", - "\n", - "[3]: Gretton, A., Sriperumbudur, B., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., & Fukumizu, K. (2012). Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems.\n", - "\n", - "[4]: Sriperumbudur, B., Fukumizu, K., Gretton, A., Lanckriet, G. R. G., & Sch\u00f6lkopf, B. (2009). Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems\n", - "\n", - "[5]: Strathmann, H. (2012). M.Sc. Adaptive Large-Scale Kernel Two-Sample Testing. University College London." - ] - }, - { - "cell_type": "code", - "collapsed": false, - "input": [], - "language": "python", - "metadata": {}, - "outputs": [] - } - ], - "metadata": {} + "kernelspec": { + "display_name": "Python 2", + "language": "python", + "name": "python2" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 2 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython2", + "version": "2.7.6" } - ] + }, + "nbformat": 4, + "nbformat_minor": 0 }