From ed02a187249e42e72d8ca0e868bfab2b4142361a Mon Sep 17 00:00:00 2001
From: Brian Bowman
Date: Mon, 16 May 2016 01:40:32 -0500
Subject: [PATCH] fix typo

---
 classification.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/classification.md b/classification.md
index 83f9099f..b14bb2e4 100644
--- a/classification.md
+++ b/classification.md
@@ -225,7 +225,7 @@ In cases where the size of your training data (and therefore also the validation
-Common data splits. A training and test set is given. The training set is split into folds (for example 5 folds here). The folds 1-4 become the training set. One fold (e.g. fold 5 here in yellow) is denoted as the Validation fold and is used to tune the hyperparameters. Cross-validation goes a step further iterates over the choice of which fold is the validation fold, separately from 1-5. This would be referred to as 5-fold cross-validation. In the very end once the model is trained and all the best hyperparameters were determined, the model is evaluated a single time on the test data (red).
+Common data splits. A training and test set is given. The training set is split into folds (for example 5 folds here). The folds 1-4 become the training set. One fold (e.g. fold 5 here in yellow) is denoted as the Validation fold and is used to tune the hyperparameters. Cross-validation goes a step further and iterates over the choice of which fold is the validation fold, separately from 1-5. This would be referred to as 5-fold cross-validation. In the very end once the model is trained and all the best hyperparameters were determined, the model is evaluated a single time on the test data (red).