Multiresponse cv voronoi #15
Conversation
…tput sampling. This is a branch copy from ci.gitlab. Note that this is work-in-progress! Things will be broken. This also implements changes to logging and metrics. Please check what we can keep or merge to master.
# Conflicts:
# harlow/sampling/bayesian_optimization.py
# harlow/sampling/fuzzy_lolavoronoi.py
# harlow/sampling/random_sampling.py
# harlow/surrogating/surrogate_model.py
# harlow/utils/helper_functions.py
# setup.cfg
# tests/benchmark_tests/benchmark_GP_surrogates.py
# tests/benchmark_tests/benchmark_samplers.py
# tests/integration_tests/FuzzyLV_tests.py
# tests/integration_tests/notest_surrogate.py
# tests/integration_tests/test_lola_voronoi.py
    metric_dict = {}
    if not isinstance(metric, list):
        metric = [metric]
    if metric is None or true_y is None:
Hi @mdbTNO, I briefly looked over the integration test results. It seems that the change in the `evaluate` function in line 48 of `harlow/utils/helper_functions.py` leads to errors in the pipeline. Previously the `evaluate` function would return zero if no test points or metric were provided; now it will instead throw an error.
Since there may be situations where one would want to run a sampler without providing test points, I think it would be preferable to refine this part. One possible solution would be to consider the different cases (see the sketch after this list):
- No metric and no test points -> Skip evaluation of metric
- Metric without test points -> Evaluate the metric and throw an error if that metric requires test points (assuming there are metrics, not yet implemented, that do not need test points, e.g. metrics based on the variance of the prediction for GP surrogates).
- Test points without metric -> Error
- Test points and metric -> Evaluate metric for test points
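A minimal sketch of what this case handling could look like, assuming the `metric`, `true_y`, and `metric_dict` names from the snippet above; the `requires_test_points` helper and the `needs_test_points` attribute are illustrative assumptions, not existing harlow API:

```python
def requires_test_points(metric_fn):
    # Illustrative convention (an assumption, not part of harlow):
    # a metric needs test points unless it is explicitly flagged otherwise.
    return getattr(metric_fn, "needs_test_points", True)


def evaluate(metric=None, true_y=None, predicted_y=None):
    """Sketch of the proposed case handling, not the actual implementation."""
    metric_dict = {}
    if metric is None and true_y is None:
        # Case 1: no metric and no test points -> skip evaluation.
        return metric_dict
    if metric is None:
        # Case 3: test points without a metric -> error.
        raise ValueError("Test points were provided but no metric.")
    if not isinstance(metric, list):
        metric = [metric]
    for m in metric:
        if true_y is None and requires_test_points(m):
            # Case 2: a metric without test points is only valid for
            # metrics that do not need them (e.g. variance-based
            # metrics for GP surrogates).
            raise ValueError(f"Metric {m.__name__} requires test points.")
        # Cases 2 (allowed) and 4: evaluate the metric.
        metric_dict[m.__name__] = m(predicted_y, true_y)
    return metric_dict
```

Note that the ordering matters here: checking `metric is None` before the `isinstance` normalization avoids the issue in the snippet above, where `metric = [metric]` turns `None` into `[None]` before the `None` check runs.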
This should probably be solved in a different merge request (I could look into this as well), so I am fine with merging this branch.
Hi Jan,
Thanks for spotting this. I completely agree with you that we should handle this according to the scheme you present.
I'll merge this branch for now and leave it as an issue we can pick up later :-)
    if not isinstance(metric, list):
        metric = [metric]
    if metric is None or true_y is None:
        raise ValueError
Highlighting the line after the `if` statement in my last comment, for clarity.
This pull request is to include the CVVoronoi sampling method into master. It also contains a generalization of logging and evaluation across all samplers.
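As a rough, hypothetical illustration of what generalized logging and evaluation across samplers could look like, here is a sketch of a shared base class; the name `BaseSampler`, its attributes, and `evaluate_and_log` are assumptions for illustration, not the actual harlow API:

```python
import logging

logger = logging.getLogger("harlow")


class BaseSampler:
    """Hypothetical shared base so every sampler (CVVoronoi, LOLA-Voronoi,
    random, ...) evaluates and logs the same way. Not the actual harlow API."""

    def __init__(self, metric=None, true_y=None):
        # `metric` is assumed to already be a list of metric callables,
        # each with signature metric(predicted_y, true_y) -> float.
        self.metric = metric
        self.true_y = true_y

    def evaluate_and_log(self, predicted_y, iteration):
        # Skip evaluation when no metric or test points are available,
        # matching case 1 of the scheme discussed above.
        if self.metric is None or self.true_y is None:
            logger.info("Iteration %d: no metric/test points, skipping.",
                        iteration)
            return {}
        scores = {m.__name__: m(predicted_y, self.true_y)
                  for m in self.metric}
        logger.info("Iteration %d: %s", iteration, scores)
        return scores
```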