`MLPRegressor` quits fitting too soon due to `self._no_improvement_count` #9456
Description
`MLPRegressor` quits fitting too soon due to `self._no_improvement_count`. `self._no_improvement_count` has a magic number limit of 2. `_update_no_improvement_count()` uses `self.best_loss_` to check if no improvement has occurred. `batch_size` tuning can improve loss curve fluctuations, but that is outside the scope of this issue.
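For context, the relevant bookkeeping in `sklearn/neural_network/multilayer_perceptron.py` works roughly as follows. This is a paraphrased sketch, not the verbatim source, and the standalone function signature is hypothetical:

```python
# Paraphrased sketch of MLP's no-improvement bookkeeping (not verbatim
# scikit-learn source; the standalone signature is hypothetical).
def update_no_improvement_count(loss_curve, best_loss, count, tol):
    last_loss = loss_curve[-1]
    if last_loss > best_loss - tol:
        # The new loss failed to beat the best loss seen so far by at
        # least tol -- counted as "no improvement" even if it improved
        # on the immediately preceding iteration.
        count += 1
    else:
        count = 0
    best_loss = min(best_loss, last_loss)
    return best_loss, count

# In the fit loop, training stops (or, for learning_rate="adaptive",
# the learning rate is decreased) once the counter exceeds the
# hard-coded limit: `if self._no_improvement_count > 2: ...`
```

Because the comparison is against the best loss ever seen rather than the previous loss, a noisy curve that merely oscillates around `best_loss_` can trip the counter even while it is still trending downward.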
Steps/Code to Reproduce

Fit an `MLPRegressor` and watch the loss curve: once the loss fails to improve on `self.best_loss_` (within `tol`) for three iterations in a row, `self._no_improvement_count > 2` holds and fitting stops, even though the loss may only be fluctuating around `self.best_loss_`. A minimal sketch is shown below.
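The original reproduction code and loss-curve plot were not preserved here; the following is an illustrative sketch of the kind of setup that triggers the behavior, assuming a small-batch SGD fit on a noisy 1-D target (all data and hyperparameters are made up for illustration):

```python
# Hypothetical reproduction sketch (the original report's code and plot
# were not preserved); any dataset with a noisy loss curve will do.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X).ravel() + rng.normal(scale=0.1, size=200)

# Small batches make the SGD loss curve noisy, so under the issue-era
# behavior the loss often fails to beat best_loss_ - tol for 3
# iterations in a row and fitting stops early.
reg = MLPRegressor(hidden_layer_sizes=(50,), solver="sgd", batch_size=8,
                   max_iter=1000, tol=1e-4, random_state=0)
reg.fit(X, y)
print(reg.n_iter_, "iterations out of", reg.max_iter)
print(reg.loss_curve_[-5:])
```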
Expected Results
`MLPRegressor` does not quit fitting unexpectedly early. Possible fixes:

- `_update_no_improvement_count()` uses the previous loss, not the best loss, which would match the documentation of `self.tol`: "[...] two consecutive iterations [...]" (see the sketch after this list).
- Increase the `self._no_improvement_count` limit beyond 2.
- Expose the `self._no_improvement_count` limit as a user-settable parameter.
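A minimal sketch of the first option, reusing the hypothetical helper shape from above; this illustrates the suggestion, not an actual scikit-learn patch:

```python
# Proposed variant: compare against the previous loss rather than
# best_loss_, matching the "two consecutive iterations" wording of tol.
def update_no_improvement_count_prev(loss_curve, count, tol):
    if len(loss_curve) >= 2 and loss_curve[-1] > loss_curve[-2] - tol:
        count += 1  # no improvement over the *previous* iteration
    else:
        count = 0
    return count
```

With this variant, a loss that keeps decreasing but oscillates above an early `best_loss_` no longer trips the counter. (For reference, scikit-learn 0.20 ultimately took the parameter route and exposed the limit as `n_iter_no_change`, defaulting to 10.)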
Actual Results
Versions
Comments

I have observed this, too. Can we maybe have a reference from the literature on what a good default would be, or is it too batch-size dependent? What does Keras do?