
Conversation

ahn1340 (Contributor) commented Apr 17, 2018

No description provided.

codecov-io commented Apr 17, 2018

Codecov Report

Merging #466 into development will decrease coverage by <.01%.
The diff coverage is 100%.


@@               Coverage Diff               @@
##           development     #466      +/-   ##
===============================================
- Coverage        78.58%   78.57%   -0.01%     
===============================================
  Files              130      130              
  Lines            10073    10074       +1     
===============================================
  Hits              7916     7916              
- Misses            2157     2158       +1
Impacted Files                                         | Coverage Δ
autosklearn/metrics/__init__.py                        | 87.06% <100%> (+0.11%) ⬆️
autosklearn/evaluation/abstract_evaluator.py           | 89.47% <100%> (ø) ⬆️
..._preprocessing/select_percentile_classification.py  | 86.2% <0%> (-3.45%) ⬇️
autosklearn/pipeline/implementations/xgb.py            | 83.33% <0%> (-0.56%) ⬇️
autosklearn/automl.py                                  | 81.71% <0%> (+0.37%) ⬆️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c732a1a...a99d1d3.

ahn1340 closed this Apr 17, 2018
ahn1340 reopened this Apr 17, 2018
mfeurer (Contributor) left a comment

Looks mostly good, only minor change requests.

y_pred = y_true.copy()

# The best possible r2 score is 1.
if metric == 'r2':

Contributor: Could you use the optimum attribute here?
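
For context, a minimal sketch of what using the optimum attribute could look like in this regression test, assuming each Scorer in autosklearn.metrics stores the value passed as make_scorer's optimum argument in an _optimum attribute (the attribute name and the surrounding test structure are assumptions here, not the PR's actual code):

import numpy as np
import autosklearn.metrics

for metric, scorer in autosklearn.metrics.REGRESSION_METRICS.items():
    y_true = np.array([1.0, 2.0, 3.0, 4.0])
    y_pred = y_true.copy()
    # A perfect prediction should reach the scorer's own optimum, so the
    # hard-coded per-metric branch (e.g. "if metric == 'r2': best = 1")
    # can be replaced by reading the attribute directly.
    best_possible = scorer._optimum  # assumed attribute, set via make_scorer(optimum=...)
    score = scorer(y_true, y_pred)
    assert np.isclose(score, best_possible)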

for metric, scorer in autosklearn.metrics.CLASSIFICATION_METRICS.items():
    # Skip functions not applicable for binary classification.
    # TODO: Average precision should work for binary classification,
    # TODO: but its behavior is not right.

Contributor: not right in what sense?

y_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
y_pred = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0],
                   [1.0, 0.0], [1.0, 0.0]])
if metric == 'log_loss':

Contributor: Could this be done via the optimum attribute of the scorer?
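
Likewise, a sketch of how the hard-coded log_loss branch in the binary-classification test could read the expected best score from the scorer instead, under the same assumption about an _optimum attribute (the attribute name and the surrounding code are illustrative assumptions):

import numpy as np
import autosklearn.metrics

# Perfect binary predictions as class probabilities (column 0 = class 0.0,
# column 1 = class 1.0).
y_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
y_pred = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0],
                   [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])

scorer = autosklearn.metrics.CLASSIFICATION_METRICS['log_loss']
# Instead of "if metric == 'log_loss': previous_score = 0", the expected best
# score comes from the scorer itself; _optimum is an assumed attribute name.
previous_score = scorer._optimum
score = scorer(y_true, y_pred)
assert np.isclose(score, previous_score)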

y_pred = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
if metric == 'log_loss':  # The best possible log_loss score is 0.
    previous_score = 0

Contributor: Could this be done via the optimum attribute of the scorer?

continue
y_true = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 1, 1]])
y_pred = y_true.copy()
previous_score = 1

Contributor: Could this be done via the optimum attribute of the scorer?

mfeurer merged commit 8a8255f into automl:development Apr 20, 2018