[Tune] Add HEBOSearch Searcher (#13863)
Conversation
Generally this looks great - thanks! Left a couple of minor comments.
Also, the tests are currently failing:
E ModuleNotFoundError: No module named 'bo'
And
E AssertionError: HEBO must be installed!. You can install HEBO with the command: `pip install git+https://github.com/huawei-noah/noah-research.git#subdirectory=HEBO`.
Maybe @amogkam can help with how to include this into the tune dependencies?
python/ray/tune/suggest/hebo.py
def _validate_warmstart(parameter_names: List[str],
                        points_to_evaluate: List[Union[List, Dict]],
                        evaluated_rewards: List):
    if points_to_evaluate:
        if not isinstance(points_to_evaluate, list):
            raise TypeError(
                "points_to_evaluate expected to be a list, got {}.".format(
                    type(points_to_evaluate)))
        for point in points_to_evaluate:
            if not isinstance(point, (dict, list)):
                raise TypeError(
                    f"points_to_evaluate expected to include list or dict, "
                    f"got {point}.")

            if not len(point) == len(parameter_names):
                raise ValueError("Dim of point {}".format(point) +
                                 " and parameter_names {}".format(
                                     parameter_names) + " do not match.")

    if points_to_evaluate and evaluated_rewards:
        if not isinstance(evaluated_rewards, list):
            raise TypeError(
                "evaluated_rewards expected to be a list, got {}.".format(
                    type(evaluated_rewards)))
        if not len(evaluated_rewards) == len(points_to_evaluate):
            raise ValueError(
                "Dim of evaluated_rewards {}".format(evaluated_rewards) +
                " and points_to_evaluate {}".format(points_to_evaluate) +
                " do not match.")
This looks similar (identical?) to the one we use in skopt. Should we just move this either to utils or to the base searcher class and use the same function? We might want to use this for other searchers in the future as well.
It is indeed identical. We could make it a part of the Searcher class, I think.
@krfricke Should I make this change in this PR?
Yep, that would be great!
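A sketch of what the shared helper could look like once lifted out of the individual searchers. The module location and public name are assumptions for illustration, not the actual Ray code; the body mirrors the duplicated `_validate_warmstart` shown above:

```python
# Hypothetical shared module, e.g. python/ray/tune/suggest/util.py.
# The logic mirrors the _validate_warmstart copies in hebo.py and skopt.py.
from typing import Dict, List, Union


def validate_warmstart(parameter_names: List[str],
                       points_to_evaluate: List[Union[List, Dict]],
                       evaluated_rewards: List):
    """Validate that warm-start points and rewards are well-formed."""
    if points_to_evaluate:
        if not isinstance(points_to_evaluate, list):
            raise TypeError(
                "points_to_evaluate expected to be a list, got {}.".format(
                    type(points_to_evaluate)))
        for point in points_to_evaluate:
            if not isinstance(point, (dict, list)):
                raise TypeError(
                    f"points_to_evaluate expected to include list or dict, "
                    f"got {point}.")
            if len(point) != len(parameter_names):
                raise ValueError(
                    "Dim of point {} and parameter_names {} "
                    "do not match.".format(point, parameter_names))
    if points_to_evaluate and evaluated_rewards:
        if not isinstance(evaluated_rewards, list):
            raise TypeError(
                "evaluated_rewards expected to be a list, got {}.".format(
                    type(evaluated_rewards)))
        if len(evaluated_rewards) != len(points_to_evaluate):
            raise ValueError(
                "Dim of evaluated_rewards {} and points_to_evaluate {} "
                "do not match.".format(evaluated_rewards,
                                       points_to_evaluate))
```

Both searchers could then import this single function instead of keeping private copies in sync.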
            self._metric_op = -1.
        elif self._mode == "min":
            self._metric_op = 1.
We should check here that `self._space` is not empty (apparently this check seems to be missing from some other searchers?)
I copied it from one of the existing ones, so it would appear so. I'll add it here, though the rest should probably be fixed in another PR.
@krfricke Should it throw an exception if `self._space` is empty?
Yes - it's enough to do it for this Searcher here. We'll tackle the other ones somewhere else
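A minimal sketch of what such a guard could look like; the helper name and error message are illustrative assumptions, not the final code:

```python
# Illustrative sketch: reject an empty search space before the
# metric/mode handling above runs. The message text is an assumption.
def check_space_not_empty(space: dict) -> dict:
    if not space:
        raise ValueError(
            "Got empty search space. Please make sure to pass a valid "
            "search space to the HEBOSearch constructor.")
    return space
```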
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
As for the whole dependency thing - I am still waiting for any response from the principal author (made a PR on GitHub and sent an email). If that doesn't come soon, and considering that HEBO is released under the MIT license, I think the best way would be for me to create my own fork, sort out those small issues, and upload it to PyPI (with full notice of authorship), thus removing the need to install from git. I'd be happy to transfer ownership of the package to the principal author should they prefer that too.
Sounds good to me!
Hey @Yard1 we actually changed the dependency stuff very recently. If you merge master, you should then just have to add the new dependency to
Most of the test failures seem to be unrelated to this PR.
Almost ready, the changes should be easy to do
    # The config will be automatically converted to SkOpt's search space

    # Optional: Pass the parameter space yourself
    # space = {
    #     "width": (0, 20),
    #     "height": (-100, 100),
    #     "activation": ["relu", "tanh"]
    # }
nit: this comment refers to SkOpt.
Related: Is this commented-out space definition actually compatible with HEBO?
No, it's not compatible. I'll put in the actual design space example there.
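For reference, HEBO's native format differs from SkOpt's tuple-based one: it takes a list of parameter dicts parsed into a `DesignSpace`. A sketch of a roughly equivalent space, written as plain data so it does not require HEBO to be installed; the field names (`num`, `cat`, `lb`, `ub`, `categories`) follow HEBO's README, but verify them against the installed version:

```python
# Plain-data sketch of a HEBO design space roughly equivalent to the
# commented-out SkOpt space above. With HEBO installed, this list would
# be passed to hebo.design_space.design_space.DesignSpace().parse(...).
space_config = [
    {"name": "width", "type": "num", "lb": 0, "ub": 20},
    {"name": "height", "type": "num", "lb": -100, "ub": 100},
    {"name": "activation", "type": "cat", "categories": ["relu", "tanh"]},
]
```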
Just some suggestions to create better error messages. Let me know if you're happy with them. (Please note that this might break the linter, so if you choose to commit the suggestions, pull locally, run the format script, and push again a final time.)
In the latest check the HEBO search was flaky, but it seems that this is due to a general PyTorch error? Can you take a quick look to confirm?
Otherwise this is great. Thank you very much for the contribution!
Co-authored-by: Kai Fricke <krfricke@users.noreply.github.com>
@krfricke Thanks, I applied the suggestions and reformatted it. Not sure what caused the test to flake before, but I had no errors when running it locally, so it could have just been a fluke.
Awesome! Thanks again!
Why are these changes needed?
This PR adds a Tune Searcher using HEBO (Heteroscedastic Evolutionary Bayesian Optimization). HEBO is a cutting-edge black-box optimization algorithm created by Huawei's Noah's Ark Lab. It was the winning submission to the NeurIPS 2020 Black-Box Optimisation Challenge, which included real-life objective functions such as tuning the performance of standard machine learning models on real data sets.
As of now, this PR is a draft due to some minor issues in the HEBO implementation that prevent it from offering the same functionality as the rest of the algorithms already included in Tune, as well as the lack of a PyPI package (installing from GitHub causes an error due to fixable dependency conflicts). I am trying to get in contact with the authors of HEBO in order to solve those issues. The code in this PR has full functionality.
Docs, examples and tests have been included, though CI may fail due to issues described above.
Checks
I've run `scripts/format.sh` to lint the changes in this PR.