We know that all folds in TabRepo's current suite were generated with the same seed (the AutoGluon default seed), so we can go back and recover those indices for each dataset in order to simulate cross-validation early stopping at the fold level.
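Since the splits are deterministic given the seed, they can be regenerated offline. A minimal sketch, assuming the folds come from a stratified k-fold split with shuffling and that `seed=0` stands in for the AutoGluon default seed (both are assumptions, not confirmed from TabRepo's code):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold


def get_fold_indices(y, n_splits=8, seed=0):
    """Regenerate (train_idx, val_idx) pairs for each fold.

    Assumes a stratified k-fold split with a fixed seed; the seed value
    and splitter are illustrative placeholders for the real settings.
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    # StratifiedKFold only needs y for stratification; X can be a dummy array.
    return list(skf.split(np.zeros(len(y)), y))


# Example: 120 toy labels across 3 classes
y = np.array([0, 1, 2] * 40)
folds = get_fold_indices(y, n_splits=8, seed=0)
print(len(folds))  # 8
```

With the fold indices in hand, per-fold validation scores can be recomputed and early stopping simulated by truncating the fold sequence at any point.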
For datasets with fewer than 800 rows, check the training logs to see how many folds were used. It may always be 8, or it may fall back to the old logic that selects between 5 and 8 folds depending on the number of rows.
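If the old row-dependent logic turns out to be in play, it could look something like the sketch below. The thresholds and the `rows_per_fold` heuristic are illustrative placeholders, not the actual AutoGluon values, which would need to be confirmed from the training logs:

```python
def num_folds(n_rows, min_folds=5, max_folds=8, rows_per_fold=100):
    """Hypothetical old logic: more rows -> more folds, clamped to [5, 8].

    rows_per_fold is a placeholder parameter; the real rule must be
    verified against the training logs.
    """
    return max(min_folds, min(max_folds, n_rows // rows_per_fold))


print(num_folds(400))   # 5
print(num_folds(650))   # 6
print(num_folds(1200))  # 8
```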
Special handling is needed for RandomForest, ExtraTrees, and KNN, where we did not use traditional folds when fitting.
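For these models there are no per-fold validation predictions to truncate. As one illustration of a fold-free validation mechanism, tree ensembles can produce out-of-bag (OOB) estimates over the full training set; the sketch below shows the general scikit-learn OOB mechanism and is not the exact TabRepo/AutoGluon handling:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for one dataset in the suite.
X, y = make_classification(n_samples=500, random_state=0)

# oob_score=True scores each sample with trees that did not see it during
# bootstrap sampling, giving a validation estimate without explicit folds.
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)
print(round(rf.oob_score_, 3))  # OOB accuracy estimate
```

Because there is no fold sequence here, simulating fold-level early stopping for these models requires a separate decision about what, if anything, "stopping after k folds" should mean.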
Reference Paper: "Don’t Waste Your Time: Early Stopping Cross-Validation"