I think we should have `scoring=None` for all Gradient Boosting estimators by default and still enable early stopping (see the sketch below):

- if `self.validation_split` is not None, call `self.loss_(y_validation, raw_predictions_validation)` instead of a `Scorer` instance;
- if `self.validation_split` is None, find a way to incrementally update an estimator of the full training loss without adding any additional computational cost (I think it should be doable but I have not checked in detail);
- if `n_iter_no_change is None or n_iter_no_change == 0`, disable early stopping and run to `max_iter`.

@NicolasHug WDYT?
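A minimal sketch of the proposed dispatch, assuming hypothetical helpers and attributes (`_fit_one_iteration`, `_raw_predict`, `_training_loss`) that only mirror the discussion above, not an actual scikit-learn API:

```python
import numpy as np

def fit_with_early_stopping(est, X_train, y_train, X_val=None, y_val=None):
    # n_iter_no_change is None or 0: early stopping disabled, always
    # run the full max_iter boosting iterations.
    if est.n_iter_no_change is None or est.n_iter_no_change == 0:
        for _ in range(est.max_iter):
            est._fit_one_iteration(X_train, y_train)  # hypothetical helper
        return est

    best_loss, n_no_improvement = np.inf, 0
    for _ in range(est.max_iter):
        est._fit_one_iteration(X_train, y_train)  # hypothetical helper
        if X_val is not None:
            # validation_split is not None: monitor the loss on the
            # held-out data instead of a Scorer instance.
            current_loss = est.loss_(y_val, est._raw_predict(X_val))
        else:
            # validation_split is None: monitor an incrementally updated
            # estimate of the full training loss.
            current_loss = est._training_loss  # hypothetical attribute
        if current_loss < best_loss:
            best_loss, n_no_improvement = current_loss, 0
        else:
            n_no_improvement += 1
            if n_no_improvement >= est.n_iter_no_change:
                break  # no improvement for n_iter_no_change iterations
    return est
```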
For the second point, you mean updating the training loss while we update the gradients to avoid redundant computations? This seems doable for LS at least.
> For the second point, you mean updating the training loss while we update the gradients to avoid redundant computations? This seems doable for LS at least.
Yes.
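For least squares this is indeed nearly free: the per-sample gradients computed at every boosting iteration are just the signed residuals, so the training loss can be refreshed from the same buffer. A minimal sketch, assuming the loss is the mean of `0.5 * (y - raw_predictions) ** 2` (the function name is hypothetical):

```python
import numpy as np

def update_gradients_and_training_loss(gradients, y, raw_predictions):
    # The least-squares gradient w.r.t. raw_predictions is the signed
    # residual; it has to be computed at every iteration anyway.
    np.subtract(raw_predictions, y, out=gradients)
    # The training loss falls out of the same buffer for the cost of a
    # single extra reduction; no extra pass over the ensemble is needed.
    return 0.5 * np.mean(gradients ** 2)
```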
I'll go for a simple version first (separate updates) and see what it looks like.