Hello! I'm interested in using DEHB for HPO of neural networks.
But I couldn't find any code related to model checkpointing. Does training for every budget start from scratch?
Hi!
Thanks for your interest in DEHB. DEHB was originally designed to interface with black-box functions that come with a specific fidelity value; for a neural network this would be the maximum number of training epochs. One way to approach checkpointing would be to implement it inside the objective_function itself, so that when a model is queried at a higher fidelity it resumes training from a state saved to disk. However, DEHB currently does not support such a feature out of the box.
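For concreteness, here is a minimal sketch of what such a checkpoint-aware objective_function could look like. Note that this is an illustration, not DEHB's actual API: the exact signature, the name of the fidelity parameter, the expected return format, and the build_model / train_one_epoch / validate helpers are all assumptions you would adapt to your setup.

```python
import hashlib
import os

import torch

def objective_function(config, fidelity, **kwargs):
    """Hypothetical sketch: resume training from a checkpoint keyed by config.

    Assumes `config` behaves like a dict and `fidelity` is the number of
    epochs to train up to; adjust for your DEHB version.
    """
    # Derive a stable checkpoint path from the configuration so that a
    # re-evaluation of the same config at a higher fidelity can resume.
    config_id = hashlib.md5(repr(sorted(config.items())).encode()).hexdigest()
    ckpt_path = f"checkpoints/{config_id}.pt"

    model = build_model(config)  # user-defined model constructor (hypothetical)
    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])

    start_epoch = 0
    if os.path.exists(ckpt_path):
        # Resume from the epochs already trained at a lower fidelity.
        state = torch.load(ckpt_path)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_epoch = state["epoch"]

    # Only train the epochs not yet covered by the checkpoint.
    for epoch in range(start_epoch, int(fidelity)):
        train_one_epoch(model, optimizer)  # user-defined training step (hypothetical)

    os.makedirs("checkpoints", exist_ok=True)
    torch.save(
        {
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "epoch": int(fidelity),
        },
        ckpt_path,
    )

    val_loss = validate(model)  # user-defined validation (hypothetical)
    # Report the incremental training cost, since earlier epochs were reused.
    return {"fitness": val_loss, "cost": int(fidelity) - start_epoch}
```

One caveat with this pattern: because DEHB is unaware of the checkpoints, the reported cost per evaluation should reflect only the incremental epochs trained, otherwise the optimizer's runtime accounting will be off.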
That said, there is an open (currently unmerged) PR #6 that might be of interest to you!
(Unfortunately, I cannot commit to a time when this PR will be merged, but user feedback on the PR might expedite the merge ;) )