
Implementing checkpointing and recovery #398

Open
aangelos28 opened this issue May 22, 2024 · 2 comments
Comments

@aangelos28

Is there a built-in way to do checkpointing on the Bayesian optimization using the GP surrogate and later recover its state, if say the application unexpectedly terminates?

One possible way could be to checkpoint the inputs/outputs, then feed all this data back into the strategy and retrain the model upon restarting the application. But this incurs the cost of retraining and would require statically seeding the RNG. Are there any other drawbacks to this?

Alternatively, what else needs to be checkpointed? The BoTorch model?

Thanks!

@bertiqwerty
Contributor

Hi there. Currently, you can serialize your strategy to JSON, including your data, and restart from there, with the drawbacks you mentioned. Anything more efficient than that is currently up to the user. Note that ENTMOOT, for instance, does not use BoTorch models.
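For anyone landing here later, a minimal sketch of the data-only checkpointing approach described above: persist the evaluated inputs/outputs (plus the RNG seed) to JSON, and on restart reload them and refit the surrogate. The file name, the `save_checkpoint`/`load_checkpoint` helpers, and the commented-out `strategy.tell(...)` call are all hypothetical, not part of any particular library's API.

```python
import json
import random
from pathlib import Path

CHECKPOINT = Path("bo_checkpoint.json")  # hypothetical checkpoint file name
SEED = 42  # fixed seed so a restarted run reproduces the RNG state

def save_checkpoint(inputs, outputs, path=CHECKPOINT):
    """Persist all evaluated points so the surrogate can be refit later."""
    state = {"seed": SEED, "inputs": inputs, "outputs": outputs}
    path.write_text(json.dumps(state))

def load_checkpoint(path=CHECKPOINT):
    """Restore stored data; returns empty lists on a fresh start."""
    if not path.exists():
        return [], []
    state = json.loads(path.read_text())
    random.seed(state["seed"])  # re-seed so candidate generation is reproducible
    return state["inputs"], state["outputs"]

# After a crash: reload the data and refit the surrogate from scratch.
inputs, outputs = load_checkpoint()
# strategy.tell(inputs, outputs)  # hypothetical refit call; the actual API varies
```

Call `save_checkpoint` after every evaluated candidate so at most one experiment is lost on an unexpected termination. This trades the retraining cost mentioned above for not having to serialize the fitted BoTorch model itself.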

@aangelos28
Author

I see. Thanks for the clarification!
