Continue an optimization run. #6
base: development
Conversation
To do so, we store essentially all important information (population, fitness values, bracket states, ...) as a pickle object. By reloading this object, we can continue training from that state. One thing to discuss is the current bracket state: I am not quite certain whether there is another way to make sure that the brackets are restored correctly.

Also, this functionality is about continuing rather than warmstarting. The difference, for me, is that warmstarting would mean reloading multiple states, and combining the populations per budget might not be so simple; restarting or continuing is easier, because we only have to restore the old state. Happy to discuss this difference.

I have also added a short logger that writes the results to file, similar to HpBandSter's BOHB. (TODO: config.json is still missing.)
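A minimal sketch of what this save/restore cycle could look like (the attribute and function names here are illustrative assumptions, not the actual API of this PR):

```python
import pickle


def save_state(dehb, path="dehb_state.pkl"):
    """Snapshot everything needed to resume a run.

    The attribute names below are assumptions for illustration.
    """
    state = {
        "population": dehb.population,
        "fitness": dehb.fitness,
        "brackets": dehb.active_brackets,
    }
    with open(path, "wb") as f:
        pickle.dump(state, f)


def restore_state(dehb, path="dehb_state.pkl"):
    """Reload a snapshot so the run continues from where it stopped."""
    with open(path, "rb") as f:
        state = pickle.load(f)
    dehb.population = state["population"]
    dehb.fitness = state["fitness"]
    dehb.active_brackets = state["brackets"]
```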
@Neeratyoy: Could you please create a dev branch, so I don't have to push to your main branch? This would make me sleep better :-)
done :)
Perfect. It is now pointing to dev.
```python
from loguru import logger
import sys

logger.configure(handlers=[{"sink": sys.stdout, "level": "INFO"}])
```
Does bringing this here help, or does it clash with the configuration defined in optimizers/dehb.py? Can `logger` now be imported directly? Or could you point me to how you're using this, maybe?
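For background, a small self-contained sketch of how loguru's single global logger behaves (my own illustration, not code from this PR): a later `logger.configure()` call replaces previously installed handlers rather than adding to them, which is the potential clash raised above.

```python
from loguru import logger
import sys

# loguru exposes one global logger object; configure() *replaces*
# any previously installed handlers instead of adding to them.
logger.configure(handlers=[{"sink": sys.stdout, "level": "INFO"}])
logger.info("goes to stdout at INFO and above")

# If optimizers/dehb.py later runs its own configure(), that call
# wins and the handler installed above is removed.
logger.configure(handlers=[{"sink": sys.stderr, "level": "DEBUG"}])
logger.debug("now goes to stderr at DEBUG and above")
```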
This is actually a great contribution, so thanks @PhMueller!
Hi @PhMueller, would it be possible to include the commit fc5fd98 for pip installability in a separate PR? Currently, @Neeratyoy does not have time to look at all the issues in this PR, and being installable by pip is useful anyway; e.g., for me the example here: https://github.com/automl/DEHB/tree/master/utils seems to require …
Hey, sorry for neglecting this PR. I will hopefully find some time to finish it by the end of this year. In the meantime, @RaghuSpaceRajan, you can take a look at #11.
This PR adds the functionality to continue an optimization run.
It also adds the option to restrict a worker to running only a single task at a time. It happened that a worker was overloaded with tasks even though, due to memory limitations, it should not perform multiple tasks in parallel.
This solution is similar to the one described in this StackOverflow article.
I will add an example of how to use restarting and how to limit the resources per worker (see the sketch below).
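Until then, a minimal sketch of one way to enforce one task per worker with dask.distributed (which DEHB builds on); the resource name "slots" and the function `evaluate_config` are illustrative assumptions, not the exact code of this PR:

```python
from dask.distributed import Client


def evaluate_config(config):
    """Placeholder for the real, memory-heavy objective function."""
    return config


# Give each local worker a single thread and one unit of an abstract
# "slots" resource ...
client = Client(n_workers=4, threads_per_worker=1, resources={"slots": 1})

# ... and have every task request one unit, so the scheduler never
# assigns two of these tasks to the same worker at the same time.
future = client.submit(evaluate_config, {"lr": 0.01}, resources={"slots": 1})
print(future.result())
```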