Commit
Merge pull request #1796 from karlnapf/develop
added FITC part of GP notebook and X-validation notebook draft
karlnapf committed Dec 17, 2013
2 parents 7230f07 + 65a66d7 commit 70f2657
Showing 2 changed files with 456 additions and 1 deletion.
333 changes: 333 additions & 0 deletions doc/ipython-notebooks/evaluation/xval_modelselection.ipynb
@@ -0,0 +1,333 @@
{
"metadata": {
"name": ""
},
"nbformat": 3,
"nbformat_minor": 0,
"worksheets": [
{
"cells": [
{
"cell_type": "heading",
"level": 1,
"metadata": {},
"source": [
"Evaluation, Cross-Validation, and Model Selection"
]
},
{
"cell_type": "heading",
"level": 4,
"metadata": {},
"source": [
"By Heiko Strathmann - <a href=\"mailto:heiko.strathmann@gmail.com\">heiko.strathmann@gmail.com</a> - <a href=\"github.com/karlnapf\">github.com/karlnapf</a> - <a href=\"herrstrathmann.de\">herrstrathmann.de</a>.\n",
"\n",
"\n",
"Based on the model selection framework of his <a href=\"http://www.google-melange.com/gsoc/project/google/gsoc2011/XXX\">Google summer of code 2011 project</a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook illustrates how to evaluate prediction algorithms in Shogus using cross-validation, and how to select their parameters using grid-search. Cross-validation estimates the expected value of a chosen loss function (for example testing error) via dividing data into disjoint partitions and then running the algorithms on a number of combinations for training and testing. Grid-search is a way to compare a number of registered parameters using cross-validation. We demonstrate this on a number of different algorithms within Shogun."
]
},
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"General Idea"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Cross validation is aims to estimate an algorithms performance on unseen data. For example, one might be interested in the average classification accuracy of a Support Vector Machine when being applied to new data, that it was not trained on. This is important in order to compare the performance different algorithms on the same target. Most crucial is the point that the data that was used for running/training the algorithm is not used for testing. Different algorithms here also can mean different parameters of the same algorithm. Thus, cross-validation can be used to tune parameters of learning algorithms, as well as comparing different families of algorithms against each other. Cross-validation estimates are related to the marginal likelihood in Bayesian statistics in the sense that using them for selecting models avoids overfitting.\n",
"\n",
"Formally, this is achieved via partitioning a dataset $X$ of size $|X|=n$ into $k<n$ disjoint partitions $X_i\\subseteq X$ such that $X_1 \\cup X_2 \\cup \\dots \\cup X_n$ such that $X_i\\cap X_j=\\emptyset$ for all $i\\neq j$. Then, the algorithm is executed on all $k$ possibilities of merging $k-1$ partitions and subsequently tested on the remaining partition. This results in $k$ performances which are evaluated in some metric of choice (Shogun support multiple ones). The procedure can be repeated (on different splits) in order to obtain less variance in the estimate. See TODO for more information, and [1] for a nice review on cross-validation using different performance measures.\n",
"\n",
"There are various strategies to generate splits for cross-validation. We start by a very simply illustrational case."
]
},
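{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make this partitioning concrete, here is a tiny, purely illustrative NumPy sketch (not part of Shogun; the helper name `make_folds` is made up for this example). It splits $n$ indices into $k$ disjoint folds and uses each fold once for testing while the remaining $k-1$ folds form the training set."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"import numpy as np\n",
"\n",
"def make_folds(n, k, seed=1):\n",
"    # shuffle the indices 0..n-1 and cut them into k disjoint folds\n",
"    idx=np.random.RandomState(seed).permutation(n)\n",
"    return np.array_split(idx, k)\n",
"\n",
"folds=make_folds(n=10, k=5)\n",
"for i, test_idx in enumerate(folds):\n",
"    # merge the remaining k-1 folds into the training set\n",
"    train_idx=np.hstack([f for j, f in enumerate(folds) if j!=i])\n",
"    print \"fold %d: train %s, test %s\" % (i, sorted(train_idx), sorted(test_idx))"
],
"language": "python",
"metadata": {},
"outputs": []
},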
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Toy example: Binary Support Vector Classification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Following the example from above, we will tune the performance of a SVM on a binary classification problem. We will\n",
"\n",
"* demonstrate how to evaluate a loss function or metric on a given algorithm\n",
"* then learn how to estimate this metric for the algorithm performing on unseen data\n",
"* and finally use those techniques to tune the parameters to obtain the best possible results.\n",
"\n",
"The involved methods are\n",
"\n",
" * LibSVM (TODO link) as the binary classification algorithms\n",
" * the area under the ROC curve (AUC) as performance metric\n",
" * three different kernels to compare"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# include all Shogun classes\n",
"from modshogun import *"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# generate some ultra easy training data\n",
"n=10\n",
"X=hstack((randn(2,n), randn(2,n)+1))\n",
"y=hstack((-ones(n), ones(n)))\n",
"#plot(X[0,y==-1],X[1,y==-1], 'bo')\n",
"#plot(X[0,y==1],X[1,y==1], 'ro')\n",
"#_=legend([\"Class 1\", \"Class 2\"])"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# training data in Shogun representation\n",
"features=RealFeatures(X)\n",
"labels=BinaryLabels(y)\n",
"\n",
"# define SVM with a small rbf kernel (always normalise the kernel!)\n",
"C=1\n",
"kernel=GaussianKernel(2, 0.001)\n",
"kernel.init(features, features)\n",
"kernel.set_normalizer(SqrtDiagKernelNormalizer())\n",
"classifier=LibSVM(C, kernel, labels)\n",
"\n",
"# train\n",
"_=classifier.train()"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Ok, we now have performed classification on the training data. How good did this work? We can easily do this for many different performance measures."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# instanciate a number of Shogun performance measures\n",
"metrics=[ROCEvaluation(), AccuracyMeasure(), ErrorRateMeasure(), F1Measure(), PrecisionMeasure(), RecallMeasure(), SpecificityMeasure()]\n",
"\n",
"for metric in metrics:\n",
" print metric.get_name(), metric.evaluate(classifier.apply(features), labels)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note how for example error rate is 1-accuracy. All of those numbers represent the training error, i.e. the ability of the classifier to explain the given data."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, the training error is zero. This seems good at first. But is this setting of the parameters a good idea? No! A good performance on the training data alone does not mean anything. A simple look up table is able to produce zero error on training data. What we want is that our methods generalises the input data somehow to perform well on unseen data. We will now use cross-validation to estimate the performance on such.\n",
"\n",
"Cross-validation is based upon splitting the data into multiple partitions. Shogun has various strategies for this. On classificaiton data, the best choice is stratified cross-validation (TODO link). This divided the data in such way that the fraction of labels in each partition is roughly the same, which reduces the variance of the performance estimate quite a bit, in particular for data with more than two classes.\n",
"\n",
"In Shogun this is realised via instance of the abstract base class [CSplittingStrategy](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSplittingStrategy.html). We will use [CStratifiedCrossValidationSplitting](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStratifiedCrossValidationSplitting.html), which accepts a reference to the labels and the number of partitions as parameters. This instance is then passed to the class [CCrossValidation](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCrossValidation.html), which does the estimation using the desired splitting strategy. The latter class can take all algorithms that are implemented against the [CMachine](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html) interface."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"k=7\n",
"split=StratifiedCrossValidationSplitting(labels, k)\n",
"metric=AccuracyMeasure()\n",
"cross=CrossValidation(classifier, features, labels, split, metric)\n",
"\n",
"# perform the cross-validation, note that this call involved a lot of computation\n",
"result=cross.evaluate()\n",
"\n",
"# the result needs to be casted to CrossValidationResult\n",
"result=CrossValidationResult.obtain_from_generic(result)\n",
"\n",
"# this class contains a field \"mean\" which contain the mean performance metric\n",
"print \"Testing\", metric.get_name(), result.mean"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now this is incredibly bad compared to the training error. In fact, it is very close to random performance (0.5). The lesson: Never judge your algorithms based on the performance on training data!\n",
"\n",
"Note that for small data sizes, the cross-validation estimates are quite noisy. If we run it multiple times, we get different results."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"print \"Testing\", metric.get_name(), [CrossValidationResult.obtain_from_generic(cross.evaluate()).mean for _ in range(10)]"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is better to average a number of different runs of cross-validation in this case. A nice side effect of this is that the results can be used to estimate error intervals for a given confidence rate."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# 20 runs and 95% confidence intervals\n",
"cross.set_num_runs(25)\n",
"cross.set_conf_int_alpha(0.05)\n",
"\n",
"# perform x-validation (now even more expensive)\n",
"cross.evaluate()\n",
"result=cross.evaluate()\n",
"result=CrossValidationResult.obtain_from_generic(result)\n",
"\n",
"print \"Testing is in [%.2f, %.2f] with mean %.2f with %.0f%% confidence\" \\\n",
"%(result.conf_int_low, result.conf_int_up, result.mean, (1-result.conf_int_alpha)*100)"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using this machinery, it is very easy to compare multiple kernel parameters against each other to find the best one. It is even possible to compare a different kernel."
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"widths=2**linspace(-5,25,10)\n",
"results=zeros(len(widths))\n",
"upper=zeros(len(widths))\n",
"lower=zeros(len(widths))\n",
"\n",
"for i in range(len(results)):\n",
" kernel.set_width(widths[i])\n",
" result=CrossValidationResult.obtain_from_generic(cross.evaluate())\n",
" results[i]=result.mean\n",
" upper[i]=result.conf_int_up\n",
" lower[i]=result.conf_int_low\n",
" \n",
"plot(log2(widths), results, 'blue')\n",
"fill_between(log2(widths), lower, upper, color='blue', alpha=0.3)\n",
"xlabel(\"log2 Kernel width\")\n",
"ylabel(metric.get_name())\n",
"_=title(\"Accuracy for different kernel widths\")\n",
"\n",
"print \"Best Gaussian kernel width %.2f\" % widths[results.argmax()], \"gives\", results.max()\n",
"\n",
"# compare this with a linear kernel\n",
"classifier.set_kernel(LinearKernel())\n",
"lin_k=CrossValidationResult.obtain_from_generic(cross.evaluate())\n",
"plot([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.mean,lin_k.mean], 'r')\n",
"\n",
"# please excuse this horrible code :)\n",
"fill_between([log2(widths[0]), log2(widths[len(widths)-1])], [lin_k.conf_int_low,lin_k.conf_int_low],\\\n",
" [lin_k.conf_int_up, lin_k.conf_int_up], color=\"red\", alpha=0.3)\n",
"print \"Linear kernel gives\", lin_k\n",
"\n",
"_=legend([\"Gaussian\", \"Linear\"], loc=\"lower center\")"
],
"language": "python",
"metadata": {},
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This gives a brute-force way to select paramters of any algorithm implemented under the [CMachine](http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html) interface. The cool thing about this is, that it is also possible to compare different model families against each other. Below, we compare a a number of binary classifiers in Shogun."
]
},
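{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of such a comparison (the `LibLinear` and `Perceptron` constructors below follow Shogun's modular Python examples, but signatures may differ between versions, so treat this as an illustration rather than a definitive recipe), we can plug different machines into exactly the same cross-validation machinery used above:"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [
"# sketch: compare different model families with the same cross-validation setup.\n",
"# constructors follow Shogun's modular Python examples; please check the API\n",
"# documentation of your Shogun version before relying on them.\n",
"\n",
"# fresh LibSVM with a normalised Gaussian kernel (cross-validation trains it per fold)\n",
"gauss=GaussianKernel(2, 1.0)\n",
"gauss.set_normalizer(SqrtDiagKernelNormalizer())\n",
"\n",
"candidates={\"LibSVM (Gaussian kernel)\": LibSVM(C, gauss, labels),\n",
"            \"LibLinear\": LibLinear(C, features, labels),\n",
"            \"Perceptron\": Perceptron(features, labels)}\n",
"\n",
"for name, machine in candidates.items():\n",
"    cv=CrossValidation(machine, features, labels,\n",
"                       StratifiedCrossValidationSplitting(labels, k), AccuracyMeasure())\n",
"    cv.set_num_runs(25)\n",
"    result=CrossValidationResult.obtain_from_generic(cv.evaluate())\n",
"    print name, \"gives mean accuracy\", result.mean"
],
"language": "python",
"metadata": {},
"outputs": []
},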
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"References"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[1] Forman, G. and Scholz, M. (2009). Apples-to-apples in cross-validation studies: Pitfalls in classifier performance measurement. Technical report, HP Laboratories."
]
},
{
"cell_type": "heading",
"level": 2,
"metadata": {},
"source": [
"Coming Soon"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" * Comparing different families of learning algorithms using cross-validation\n",
" * Model selection via grid-search and more advanced search techniques"
]
},
{
"cell_type": "code",
"collapsed": false,
"input": [],
"language": "python",
"metadata": {},
"outputs": []
}
],
"metadata": {}
}
]
}