feat: Add method to get the best configuration directly from Tuner, add comments on how to rerun the best configuration found #767
Conversation
Codecov Report
Additional details and impacted files

@@            Coverage Diff            @@
##             main     #767     +/-  ##
==========================================
- Coverage   64.06%   62.98%   -1.09%
==========================================
  Files         436      437       +1
  Lines       29136    28904     -232
==========================================
- Hits        18666    18204     -462
- Misses      10470    10700     +230

☔ View full report in Codecov by Sentry.
f"```tuner.trial_backend.start_trial(config={config}, checkpoint_trial_id={trial_id})``` to start from "
f"last checkpoint (your script should have stored a checkpoint)"
Here or in the FAQ entry, would it make sense to explain when you would use best_config()
versus a checkpoint?
It is not really one versus the other: you can only restart from a checkpoint if your script supports checkpointing, which may not be the case.
I do not think it would make sense to explain checkpointing there, as it has its own set of FAQ items, for instance:
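To make the "your script should have stored a checkpoint" requirement concrete, here is a minimal, self-contained sketch of a training script that stores and resumes from a checkpoint. All names here (the `train` function, the checkpoint file layout) are illustrative assumptions, not part of the Syne Tune API:

```python
# Sketch of a training script that stores a checkpoint each epoch, so a
# trial could later be restarted and resumed rather than retrained from
# scratch. The file layout and function names are hypothetical.
import json
import os
import tempfile


def train(config, checkpoint_dir, epochs=5):
    ckpt = os.path.join(checkpoint_dir, "checkpoint.json")
    start, loss = 0, float("inf")
    if os.path.exists(ckpt):
        # Resume from the last stored checkpoint instead of epoch 0
        with open(ckpt) as f:
            state = json.load(f)
        start, loss = state["epoch"], state["loss"]
    for epoch in range(start, epochs):
        loss = config["lr"] / (epoch + 1)  # stand-in for a real update step
        with open(ckpt, "w") as f:
            # Store a checkpoint after every epoch
            json.dump({"epoch": epoch + 1, "loss": loss}, f)
    return loss


ckpt_dir = tempfile.mkdtemp()
first = train({"lr": 0.1}, ckpt_dir, epochs=3)    # runs epochs 0..2
resumed = train({"lr": 0.1}, ckpt_dir, epochs=5)  # resumes at epoch 3
```

A script without this store/resume logic can only be restarted from scratch, which is why `best_config()` does not strictly compete with checkpoint-based restarts.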
examples/launch_plot_results.py
Outdated
# Print the best configuration found from the tuner and retrain it
trial_id, best_config = tuner.best_config()
tuner.trial_backend.start_trial(config=best_config)
Maybe plot again, and hopefully show improvement? Or consider splitting this out into a separate retraining example?
Otherwise it feels a bit random: why train again and then do nothing with the result?
One use case could be to run with a larger budget. I do not have a use case personally, but I know some people ask for this, so it would probably be good to have an example showing how it can be done.
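The selection step behind this retraining pattern can be sketched without the library: scan the trial results and return the `(trial_id, config)` pair with the best metric value. The `results` structure and the helper's signature below are assumptions for illustration, not Syne Tune's internals:

```python
# Hypothetical sketch of the selection logic a best_config() helper
# performs: pick the trial with the best metric under the given mode.
def best_config(results, metric, mode="min"):
    pick = min if mode == "min" else max
    best = pick(results, key=lambda r: r["metrics"][metric])
    return best["trial_id"], best["config"]


results = [
    {"trial_id": 0, "config": {"lr": 0.1}, "metrics": {"loss": 0.30}},
    {"trial_id": 1, "config": {"lr": 0.01}, "metrics": {"loss": 0.12}},
    {"trial_id": 2, "config": {"lr": 0.5}, "metrics": {"loss": 0.45}},
]
trial_id, config = best_config(results, metric="loss", mode="min")
```

The returned `config` could then be resubmitted with a larger budget, which matches the use case discussed above.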
Co-authored-by: Wes Kendrick <jkkndr@amazon.com>
Also updates metric_name_mode to:
- metric argument, which is an input
- verbose, which is always set to true and overlaps logging verbosity

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.