
Fix calculation of the metric score for hyperopt #1031

Merged
merged 13 commits into from Dec 24, 2020

Conversation

@w4nderlust (Collaborator) commented Nov 29, 2020

The calculation of the metric score now uses the metric at the epoch of best validation performance during training, if possible. Otherwise it falls back to the eval stats (for instance, when the metric is only computed at eval time rather than at training time) and warns the user that, if model saving is skipped, the eval statistics are obtained with the model at the last epoch rather than the model at the epoch of best validation performance.

Fix #1030
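A minimal sketch of the fallback logic described above. The function name, argument names, and stats layout are assumptions for illustration, not Ludwig's actual API:

```python
import logging

logger = logging.getLogger(__name__)


def get_metric_score(metric_name, train_valid_stats, eval_stats,
                     best_epoch, skip_save_model):
    # Hypothetical helper illustrating the PR's behavior, not Ludwig code.
    # train_valid_stats: {metric_name: [value_per_epoch, ...]} from training
    # eval_stats: {metric_name: value} from the final evaluation
    epoch_values = train_valid_stats.get(metric_name)
    if epoch_values:
        # Metric was tracked during training: take the value recorded at
        # the epoch of best validation performance.
        return epoch_values[best_epoch]
    # Metric only computed at eval time: fall back to eval statistics.
    if skip_save_model:
        logger.warning(
            "Model saving was skipped, so eval statistics come from the "
            "model at the last epoch rather than the epoch of best "
            "validation performance."
        )
    return eval_stats[metric_name]
```

For example, a metric tracked per epoch resolves to its best-epoch value, while an eval-only metric falls back to the eval stats.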

@w4nderlust w4nderlust merged commit 8af2181 into master Dec 24, 2020
@w4nderlust w4nderlust deleted the fix_hyperopt_score_calc branch December 24, 2020 02:11
milyiyo pushed a commit to milyiyo/ludwig that referenced this pull request Dec 24, 2020

Successfully merging this pull request may close these issues.

Hyperopt metric_score uses last epoch instead of best epoch