diff --git a/notebooks/metrics_sol_02.ipynb b/notebooks/metrics_sol_02.ipynb index 2efef3f68..63b3cb5ca 100644 --- a/notebooks/metrics_sol_02.ipynb +++ b/notebooks/metrics_sol_02.ipynb @@ -214,8 +214,8 @@ }, "source": [ "Even if the score distributions overlap due to the presence of outliers in the\n", - "dataset, it is true that the average MSE is lower when `loss=\"squared_error`,\n", - "whereas the average MAE is lower when `loss=\"absolute_error` as expected.\n", + "dataset, it is true that the average MSE is lower when `loss=\"squared_error\"`,\n", + "whereas the average MAE is lower when `loss=\"absolute_error\"` as expected.\n", "Indeed, the choice of a loss function is made depending on the evaluation\n", "metric that we want to optimize for a given use case.\n", "\n", diff --git a/notebooks/parameter_tuning_nested.ipynb b/notebooks/parameter_tuning_nested.ipynb index f632d16f4..6fe297cdb 100644 --- a/notebooks/parameter_tuning_nested.ipynb +++ b/notebooks/parameter_tuning_nested.ipynb @@ -354,12 +354,12 @@ "

Note

\n", "

This figure illustrates the nested cross-validation strategy using\n", "cv_inner = KFold(n_splits=4) and cv_outer = KFold(n_splits=5).

\n", - "

For each inner cross-validation split (indexed on the left-hand side),\n", + "

For each inner cross-validation split (indexed on the right-hand side),\n",
- "the procedure trains a model on all the red samples and evaluate the quality\n",
+ "the procedure trains a model on all the red samples and evaluates the quality\n",
"of the hyperparameters on the green samples.

\n", - "

For each outer cross-validation split (indexed on the right-hand side),\n", + "

For each outer cross-validation split (indexed on the left-hand side),\n",
- "the best hyper-parameters are selected based on the validation scores\n",
+ "the best hyperparameters are selected based on the validation scores\n",
- "(computed on the greed samples) and a model is refitted on the concatenation\n",
+ "(computed on the green samples) and a model is refitted on the concatenation\n",
"of the red and green samples for that outer CV iteration.

\n", "

The generalization performance of the 5 refitted models from the outer CV\n",
- "loop are then evaluated on the blue samples to get the final scores.
+ "loop is then evaluated on the blue samples to get the final scores.

\n",