Multiresponse cv voronoi #15

Merged: 7 commits merged into master from multiresponse__CV_Voronoi on Sep 12, 2022

Conversation

merijnpepijndebakker (Collaborator)

This pull request adds the CVVoronoi sampling method to master.
It also generalizes logging and evaluation across all samplers.

bakkermpd added 4 commits July 8, 2022 15:34
…tput sampling.

This is a branch copy from ci.gitlab

Note that this is work-in-progress! Things will be broken.

This also implements changes to logging and metrics. Please check what we can keep or merge to master.
# Conflicts:
#	harlow/sampling/bayesian_optimization.py
#	harlow/sampling/fuzzy_lolavoronoi.py
#	harlow/sampling/random_sampling.py
#	harlow/surrogating/surrogate_model.py
#	harlow/utils/helper_functions.py
#	setup.cfg
#	tests/benchmark_tests/benchmark_GP_surrogates.py
#	tests/benchmark_tests/benchmark_samplers.py
#	tests/integration_tests/FuzzyLV_tests.py
#	tests/integration_tests/notest_surrogate.py
#	tests/integration_tests/test_lola_voronoi.py
metric_dict = {}
if not isinstance(metric, list):
    metric = [metric]
if metric is None or true_y is None:
@JanKoune (Collaborator), Sep 12, 2022

Hi @mdbTNO, I briefly looked over the integration test results. It seems that the change to the evaluate function at line 48 of harlow/utils/helper_functions.py leads to errors in the pipeline. Previously, the evaluate function would return zero if no test points or metric were provided; now it raises an error instead.

Since there may be situations where one would want to run a sampler without providing test points, I think it would be preferable to refine this part. One possible solution would be to handle the different cases separately (a rough sketch follows the list):

  1. No metric and no test points -> skip evaluation of the metric.
  2. Metric without test points -> evaluate the metric, and raise an error if that metric requires test points (assuming there are metrics, not yet implemented, that do not need test points, e.g. metrics based on the predictive variance of GP surrogates).
  3. Test points without a metric -> error.
  4. Test points and a metric -> evaluate the metric on the test points.

This should probably be solved in a different merge request (I could look into this as well), so I am fine with merging this branch.
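
A minimal sketch of how that case handling could look, reusing the metric/true_y names from the snippet above; the function name evaluate_metrics, the pred_y argument, and the requires_test_points attribute are hypothetical illustrations, not the existing harlow API:

from typing import Callable, Sequence, Union


def evaluate_metrics(
    metric: Union[Callable, Sequence[Callable], None],
    true_y=None,
    pred_y=None,
) -> dict:
    """Return a dict of metric values, following the four cases above."""
    # Case 1: no metric and no test points -> skip evaluation of the metric.
    if metric is None and true_y is None:
        return {}

    # Case 3: test points without a metric -> error, nothing to compute.
    if metric is None:
        raise ValueError("Test points were provided but no metric was given.")

    # Accept a single metric as well as a list of metrics.
    if not isinstance(metric, (list, tuple)):
        metric = [metric]

    metric_dict = {}
    for m in metric:
        if true_y is None:
            # Case 2: metric without test points -> only allowed for metrics
            # that do not need test points (e.g. based on predictive variance).
            if getattr(m, "requires_test_points", True):
                raise ValueError(f"Metric {m.__name__} requires test points.")
            metric_dict[m.__name__] = m(pred_y)
        else:
            # Case 4: test points and a metric -> evaluate on the test points.
            metric_dict[m.__name__] = m(true_y, pred_y)
    return metric_dict

Marking metrics that need test points via an attribute (or a small registry) would keep case 2 open for future variance-based metrics without special-casing them in every sampler.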

merijnpepijndebakker (Collaborator, Author)

Hi Jan,

Thanks for spotting this. I completely agree that we should handle this according to the scheme you presented.
I'll merge this branch for now and leave it as an issue we can pick up later :-)

if not isinstance(metric, list):
    metric = [metric]
if metric is None or true_y is None:
    raise ValueError
JanKoune (Collaborator)

Highlighting the line after the if statement in my last comment, for clarity.

@merijnpepijndebakker merijnpepijndebakker merged commit bf426da into master Sep 12, 2022
@merijnpepijndebakker merijnpepijndebakker deleted the multiresponse__CV_Voronoi branch September 12, 2022 17:27