
Always training from scratch? #13

Closed · pavelbatyr opened this issue Jan 12, 2022 · 2 comments
@pavelbatyr

Hello! I'm interested in using DEHB for HPO of neural networks.
But I couldn't find any code related to model checkpointing. Does training for every budget start from scratch?

@Neeratyoy
Collaborator

Hi!
Thanks for your interest in DEHB. DEHB was originally designed to interface with black-box functions that are evaluated at a given fidelity value; for a neural network, that fidelity would typically be the maximum number of training epochs. One way to approach this is to implement model checkpointing from within the objective_function definition, so that a model queried at a higher fidelity resumes training from a state saved to disk. However, DEHB currently doesn't support such a feature out of the box.
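For illustration, here is a minimal sketch of that idea, assuming a dict-like config, an epoch-valued budget, and a {'fitness', 'cost'} return dictionary; the checkpoints/ directory, the hashing scheme, and the one-parameter toy "model" are all placeholders for the example, not part of DEHB's API:

```python
import hashlib
import os
import pickle

CKPT_DIR = "checkpoints"  # placeholder directory for saved model states


def objective_function(config, budget, **kwargs):
    """Checkpoint-aware objective function sketch.

    The exact calling convention and return format depend on the
    DEHB version in use; adjust accordingly.
    """
    # Key the checkpoint by the configuration, so re-querying the same
    # config at a higher budget resumes training instead of restarting.
    key = hashlib.md5(repr(sorted(config.items())).encode()).hexdigest()
    path = os.path.join(CKPT_DIR, f"{key}.pkl")

    # Toy stand-in for a real network: one weight minimising (w - 3)^2
    # by gradient descent, with config["lr"] as the learning rate.
    weight, done_epochs = 0.0, 0
    if os.path.exists(path):
        with open(path, "rb") as f:
            weight, done_epochs = pickle.load(f)

    # Train only for the epochs the checkpoint has not yet covered.
    epochs_to_run = max(0, int(budget) - done_epochs)
    for _ in range(epochs_to_run):
        weight -= config["lr"] * 2 * (weight - 3.0)

    os.makedirs(CKPT_DIR, exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump((weight, max(done_epochs, int(budget))), f)

    loss = (weight - 3.0) ** 2
    # 'cost' reflects only the work actually done in this call.
    return {"fitness": loss, "cost": epochs_to_run}
```

In a real setup the pickled state would also include the optimiser and learning-rate schedule, so that a resumed run behaves like an uninterrupted one.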

That said, there is currently an unmerged PR #6 that might be of interest to you!
(Unfortunately, I cannot commit to a time when this PR will be merged, but user feedback on the PR might expedite the merge ;) )

@pavelbatyr
Author

Thanks a lot for the detailed answer!
