This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Add control of iteration used in testing #78

Open
marabout2015 opened this issue Oct 15, 2019 · 0 comments
Currently, the testing script scores the data using the maximum number of iterations of the trained model. Add an "early_stopping_rounds" argument to the training script so that it records the best iteration found on the validation data, and add an argument to the testing script that controls whether that best iteration is the one used in scoring.
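A minimal sketch of the early-stopping logic being requested, in plain Python. The function name, the `val_losses` stand-in for per-iteration validation loss, and the 1-based iteration convention are all hypothetical; a real implementation would come from the gradient-boosting library the training script already uses.

```python
def train_with_early_stopping(val_losses, early_stopping_rounds):
    """Return (best_iteration, stopped_at_iteration). Iterations are 1-based.

    Stops once `early_stopping_rounds` iterations pass with no
    improvement over the best validation loss seen so far.
    """
    best_iter, best_loss = 0, float("inf")
    for i, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_iter, best_loss = i, loss  # record the best iteration
        elif i - best_iter >= early_stopping_rounds:
            return best_iter, i  # no improvement for N rounds: stop early
    return best_iter, len(val_losses)


# Validation loss improves through iteration 3, then degrades;
# with early_stopping_rounds=3, training stops at iteration 6
# and iteration 3 is recorded as the best.
best, stopped = train_with_early_stopping(
    [0.9, 0.7, 0.6, 0.65, 0.64, 0.66], early_stopping_rounds=3
)
print(best, stopped)  # → 3 6
```

The testing script would then score with `best` (e.g. `num_iteration=best`) instead of the final iteration whenever the new flag is set.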

@marabout2015 marabout2015 added the enhancement New feature or request label Oct 15, 2019