
Enable some sort of caching - improve hyperparameter tuning #31

Open
egillax opened this issue Aug 25, 2022 · 0 comments
Labels
enhancement New feature or request

Comments

egillax (Collaborator) commented Aug 25, 2022

When running a deep learning model that takes a long time, it's very frustrating if it errors out and you have to start from scratch. It would be useful to implement some sort of caching: if you use a saveLoc that already has data in it, the run would check there, load the saved results, and continue the hyperparameter tuning from where it left off. This requires saving the data just before training starts.
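
A minimal sketch of how such a resumable cache could work, assuming a Python-style tuning loop (all names here — `tune`, `train_fn`, `tuning_cache.json` — are hypothetical, not the package's actual API):

```python
import json
from pathlib import Path

def tune(param_grid, save_loc, train_fn):
    """Evaluate each configuration, resuming from results cached in save_loc."""
    save_loc = Path(save_loc)
    save_loc.mkdir(parents=True, exist_ok=True)
    cache_file = save_loc / "tuning_cache.json"  # hypothetical cache file name
    # Load results from a previous (possibly crashed) run, if any.
    results = json.loads(cache_file.read_text()) if cache_file.exists() else {}
    for i, params in enumerate(param_grid):
        key = str(i)
        if key in results:
            continue  # already evaluated in an earlier run: skip
        results[key] = {"params": params, "metric": train_fn(params)}
        # Persist after every iteration so a crash loses at most one fit.
        cache_file.write_text(json.dumps(results))
    return results
```

Persisting after every iteration (rather than once at the end) means a crash loses at most the configuration currently being trained.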

I would also like to refactor the hyperparameter search to wrap each iteration in a try-catch, so that in case of CUDA memory errors it doesn't stop but continues with the next iteration.
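
A sketch of that try-catch, using the common PyTorch idiom of catching RuntimeError and checking the message for "out of memory" (`safe_fit` and `train_fn` are hypothetical names):

```python
import torch

def safe_fit(params, train_fn):
    """Run one tuning iteration; swallow CUDA OOM so the search can go on."""
    try:
        return train_fn(params)
    except RuntimeError as e:
        if "out of memory" in str(e).lower():
            torch.cuda.empty_cache()  # release cached blocks before the next config
            return None  # record the failure and move on to the next iteration
        raise  # any other error is real: stop the search
```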

egillax added the enhancement label on Aug 25, 2022
egillax mentioned this issue on Sep 8, 2023