This repository has been archived by the owner on Nov 14, 2023. It is now read-only.
I'm getting the following error when setting `early_stopping=True` in `TuneSearchCV` with `estimator = LGBMClassifier(early_stopping_rounds=50)`:

```
ValueError: For early stopping, at least one dataset and eval metric is required for evaluation
```
The same works fine for XGBoost. A couple of related questions:
1. Why isn't it mandatory for the user to set `early_stopping_rounds` within the estimator when setting `early_stopping=True`?
2. I'm still not sure how `max_iters` comes into play. When I set `n_trials=10` and `max_iters=10`, it seems to run 100 trials anyway. How is the early stopping happening?
3. Wouldn't it be better to merge `early_stopping` and `early_stopping_rounds`, like it is done in `LightGBMTunerCV`?
4. If another level of early stopping has to be implemented across trials (mentioned here), will the same ASHA scheduler work?
I think this is not something we can influence, as the error comes from early stopping inside LightGBM itself. Our early stopping is implemented on top of whatever early stopping the estimators provide themselves. Are you using the release version or the master branch? IIRC, LightGBM early stopping is not present in the former.

I followed the logic @inventormc implemented for XGBoost, so they may have a better idea about the details, but the way I understand it: instead of refitting a model completely for each CV fold, it essentially fits on all the previous folds plus the current one. It therefore incrementally fits the estimator on a bigger and bigger portion of the dataset at each iteration, stopping the trial if there is no improvement.
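To make the idea concrete, here is a minimal pure-Python sketch of that incremental scheme. This is not the actual tune-sklearn implementation or API; `incremental_cv_fit`, `fit_score`, and `patience` are illustrative names, and the toy scoring function just stands in for a real estimator.

```python
def incremental_cv_fit(fit_score, folds, patience=2):
    """Fit on a growing prefix of CV folds, stopping the trial early.

    fit_score(train_folds) -> validation score for a model fit on them.
    At iteration i the model is fit on folds[0..i], so each iteration
    sees a larger portion of the dataset; the trial stops once the
    score has not improved for `patience` consecutive iterations.
    """
    best_score = float("-inf")
    rounds_without_improvement = 0
    scores = []
    for i in range(1, len(folds) + 1):
        score = fit_score(folds[:i])  # fit on the first i folds
        scores.append(score)
        if score > best_score:
            best_score = score
            rounds_without_improvement = 0
        else:
            rounds_without_improvement += 1
            if rounds_without_improvement >= patience:
                break  # no improvement: stop this trial early
    return scores

# Toy "estimator": the score improves with more data, then plateaus.
folds = list(range(6))
history = incremental_cv_fit(lambda fs: min(len(fs), 3), folds, patience=2)
print(history)  # -> [1, 2, 3, 3, 3]: stopped after 2 flat rounds
```

The point of the growing-prefix trick is that a schedule like ASHA can compare trials after each cheap partial fit and kill unpromising ones, instead of paying for a full k-fold fit per configuration up front.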