
validation


When evaluating machine learning models, the validation step helps you find the best parameters for your model while also preventing overfitting.

Vita supports the hold-out strategy (a.k.a. one-round cross-validation or conventional validation) and DSS (dynamic subset selection, which is not a validation strategy in the strict sense).

General approach (working with the search class)

This approach is quite low level (but very flexible). You have to:

  1. set the evaluator (a.k.a. fitness) function via the search::training_evaluator member function;
  2. possibly set the validation function via the search::validation_evaluator member function (usually this is the same function used for training). If you don't specify a validation function, no validation is performed even if a validation strategy is specified (see point 3);
  3. possibly set the specific validation strategy via the search::validation_strategy member function. If you don't specify a validation strategy, the default behavior is as_is_validation: use the validation function on the user-preset validation set, without altering it (see the sketch after this list).
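
The three steps could look roughly like this. Only the member function names (training_evaluator, validation_evaluator, validation_strategy) and the strategy ids (as_is_validation, holdout_validation) come from this page; headers, template arguments, the my_evaluator type and run() are placeholders, not the actual vita API:

```c++
// Hedged sketch, not verbatim vita code: construction details and the
// my_evaluator type are assumptions.
vita::search<vita::i_mep, vita::std_es> s(prob);    // assumed construction

s.training_evaluator<my_evaluator>();               // 1. fitness for evolution
s.validation_evaluator<my_evaluator>();             // 2. often the same one
s.validation_strategy<vita::holdout_validation>();  // 3. replaces the default
                                                    //    as_is_validation

auto result = s.run();                              // assumed entry point
```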

Symbolic regression / classification tasks (working with the src_search class)

The src_search class offers a simpler interface but can only be used for symbolic regression / classification tasks. You have to:

  1. choose one of the available evaluators (binary_evaluator, dyn_slot_evaluator, gaussian_evaluator) via the src_search::evaluator member function (which, internally, calls search::training_evaluator and search::validation_evaluator, tying them to the appropriate datasets);
  2. possibly set the specific validation strategy via the src_search::validation_strategy member function (see the sketch after this list).
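
A hedged sketch for a classification task. The evaluator names and the two member functions come from this page; dataset loading, the evaluator_id argument and run() are assumptions:

```c++
vita::src_problem prob("iris.csv");                 // assumed dataset loading
vita::src_search<vita::i_mep> s(prob);              // assumed template argument

s.evaluator(vita::evaluator_id::gaussian);          // picks gaussian_evaluator
s.validation_strategy<vita::holdout_validation>();

auto result = s.run();
```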

Validation strategies

Hold-out validation

We randomly assign data points to two sets, usually called the training set and the validation set. The sizes of the two sets are arbitrary, although typically the validation set is smaller than the training set.

We then train (build a model) on the first set and validate on the second one.

When performing R evolutionary runs, the same training set is used for each run, obtaining R solutions (one per run). The validation set is then used to compare their performance and decide which one to keep.
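
To make the split concrete, here is a standalone illustration (plain C++, not vita code) of the random assignment described above:

```c++
// Shuffle the examples, then cut them into a training part and a smaller
// validation part.
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

int main()
{
  std::vector<int> examples(1000);  // stand-in for the real data points

  std::mt19937 gen(std::random_device{}());
  std::shuffle(examples.begin(), examples.end(), gen);

  // e.g. 70% training / 30% validation; the proportion is arbitrary.
  const std::size_t cut(examples.size() * 70 / 100);
  std::vector<int> training(examples.begin(), examples.begin() + cut);
  std::vector<int> validation(examples.begin() + cut, examples.end());
}
```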

DSS

The algorithm periodically (usually every generation) selects a target number of examples from the available dataset and assigns them to the training set. The selection is random but biased: an example is more likely to be chosen if it's difficult or if it hasn't been selected for several generations.

The current generation of the population is then evaluated against this subset instead of the entire dataset.
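
The biased selection could be sketched as follows (one common formulation of DSS, not necessarily the exact weights vita uses): an example's weight grows with its difficulty and with its age, so hard or long-ignored examples are more likely to enter the next training subset.

```c++
#include <cstddef>
#include <random>
#include <vector>

struct example_stats
{
  double difficulty;  // e.g. how often individuals got this example wrong
  double age;         // generations since the example was last selected
};

std::vector<std::size_t> dss_select(const std::vector<example_stats> &stats,
                                    std::size_t target, std::mt19937 &gen)
{
  std::vector<double> weights;
  weights.reserve(stats.size());
  for (const auto &s : stats)
    weights.push_back(s.difficulty + s.age);

  // For brevity this samples with replacement; a real implementation would
  // avoid duplicates.
  std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());

  std::vector<std::size_t> subset;  // indices into the full dataset
  for (std::size_t i(0); i < target; ++i)
    subset.push_back(pick(gen));

  return subset;
}
```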

DSS performs very well on large datasets. Benefits include:

  • much less CPU time;
  • much smaller population and thus less memory needed;
  • better generalization;
  • greater pressure to find optimal solutions.

Default behavior

Following the principle of least astonishment, the framework doesn't automatically perform validation unless explicitly instructed to do so.

More specifically, the default validation strategy (as_is_validation) leaves both the training set and the validation set unchanged. So, if the validation set is:

  • empty, no validation is performed;
  • not empty, validation is performed using the fixed, preset examples.

The holdout_validation strategy is somewhat similar, but it automatically samples the training set to fill the (fixed) validation set.
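
Under the default strategy, the behaviour therefore depends only on whether the user preset a validation set. A hedged sketch, reusing the placeholder names from the first example (member function names come from this page; construction and run() are assumptions):

```c++
vita::search<vita::i_mep, vita::std_es> s(prob);

s.training_evaluator<my_evaluator>();
s.validation_evaluator<my_evaluator>();
// No validation_strategy set: as_is_validation is active and leaves both
// datasets untouched.

// - If the problem has an empty validation set, s.run() trains only and
//   skips validation entirely.
// - If the user preset a validation set, every run is validated on exactly
//   those examples, which never change.
auto result = s.run();
```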