[Fix] long running regression #272
Conversation
Codecov Report
@@ Coverage Diff @@
## development #272 +/- ##
===============================================
+ Coverage 81.67% 81.76% +0.08%
===============================================
Files 151 151
Lines 8646 8643 -3
Branches 1328 1327 -1
===============================================
+ Hits 7062 7067 +5
+ Misses 1108 1104 -4
+ Partials 476 472 -4
Continue to review full report at Codecov.
Thanks for the PR.
Could you explain how this change relates to the long-running regression?
metrics = get_metrics(dataset_properties=X['dataset_properties'])
if 'additional_metrics' in X:
    metrics.extend(get_metrics(dataset_properties=X['dataset_properties'], names=X['additional_metrics']))
if 'optimize_metric' in X and 'optimize_metric' not in [m.name for m in metrics]:
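For context, here is a self-contained sketch of the metric-merging logic in the diff above. The `Metric` class, the `get_metrics` stand-in, and the contents of `X` are placeholder assumptions, not the project's real API; the final condition is read as deduplicating the optimization metric by name.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str

def get_metrics(dataset_properties, names=None):
    # Stand-in: return one Metric per requested name, or a default metric.
    return [Metric(name=n) for n in (names or ['accuracy'])]

X = {
    'dataset_properties': {'task_type': 'classification'},
    'additional_metrics': ['balanced_accuracy'],
    'optimize_metric': 'f1',
}

# Collect the default metrics, then any explicitly requested extras.
metrics = get_metrics(dataset_properties=X['dataset_properties'])
if 'additional_metrics' in X:
    metrics.extend(get_metrics(dataset_properties=X['dataset_properties'],
                               names=X['additional_metrics']))
# Add the optimization target only if no collected metric already has that name.
if 'optimize_metric' in X and X['optimize_metric'] not in [m.name for m in metrics]:
    metrics.extend(get_metrics(dataset_properties=X['dataset_properties'],
                               names=[X['optimize_metric']]))

print([m.name for m in metrics])  # ['accuracy', 'balanced_accuracy', 'f1']
```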
Could you add a single-line comment explaining what 'optimize_metric' not in [m.name for m in metrics] means?
Pytest's approximate comparison does not work with the plain equality check used here.
I also simplified the additional_metrics code, which was causing the problem in the long-running regression.
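For reference, a minimal illustration of the kind of float comparison pytest.approx is meant for. This is an assumed example, not the PR's actual test code:

```python
import pytest

result = 0.1 + 0.2           # floating-point arithmetic: not exactly 0.3
assert result != 0.3         # exact equality fails
assert result == pytest.approx(0.3)  # approximate comparison succeeds
```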