Merged
@@ -82,6 +82,11 @@ public sealed class AveragedPerceptronTrainer : AveragedLinearTrainer<BinaryPred
     /// </summary>
     public sealed class Options : AveragedLinearOptions
     {
+        public Options()
+        {
+            NumberOfIterations = 10;
justinormont (Contributor) commented on Jul 6, 2020:
I see this correctly changed the number of iterations in the MAML-based unit tests, for example:

-Warning: Skipped 15 instances with missing features during training (over 1 iterations; 15 inst/iter)
+Warning: Skipped 150 instances with missing features during training (over 10 iterations; 15 inst/iter)

Do we have a unit test for AveragedPerceptron using the Estimator API? If not, it would be good to verify that the new defaults take hold for the AP Estimator API.

PR author (Contributor) replied:

We do. The OvaAveragedPerceptron test in OvaTests.cs is an Estimator API test. I have confirmed using that test that the new defaults are applied correctly as well.
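For illustration, a minimal check of the new default might look like the sketch below. This is an xunit-style test that is not part of this PR; the `AveragedPerceptronTrainer.Options` type is the one shown in the diff above, while the test class and method names are hypothetical.

```csharp
using Microsoft.ML.Trainers;
using Xunit;

public class AveragedPerceptronDefaultsTests
{
    // Hypothetical test, not part of this PR: a freshly constructed
    // Options instance should reflect the new default set in the
    // constructor added by this change (NumberOfIterations = 10).
    [Fact]
    public void DefaultNumberOfIterationsIsTen()
    {
        var options = new AveragedPerceptronTrainer.Options();
        Assert.Equal(10, options.NumberOfIterations);
    }
}
```

A property-level check like this complements the end-to-end OvaAveragedPerceptron test: it pins the default itself, so a future regression is caught even if baseline outputs are regenerated.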

Contributor replied:
SGTM

+        }
+
     /// <summary>
     /// A custom <a href="https://en.wikipedia.org/wiki/Loss_function">loss</a>.
     /// </summary>
@@ -1,10 +1,10 @@
 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
 Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
-Warning: Skipped 8 instances with missing features during training (over 1 iterations; 8 inst/iter)
+Warning: Skipped 80 instances with missing features during training (over 10 iterations; 8 inst/iter)
 Training calibrator.
-PAV calibrator: piecewise function approximation has 6 components.
+PAV calibrator: piecewise function approximation has 5 components.
 Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
-Warning: Skipped 8 instances with missing features during training (over 1 iterations; 8 inst/iter)
+Warning: Skipped 80 instances with missing features during training (over 10 iterations; 8 inst/iter)
 Training calibrator.
 PAV calibrator: piecewise function approximation has 6 components.
 Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
@@ -13,43 +13,43 @@ Confusion table
 ||======================
 PREDICTED || positive | negative | Recall
 TRUTH ||======================
-positive || 132 | 2 | 0.9851
+positive || 133 | 1 | 0.9925
 negative || 9 | 211 | 0.9591
 ||======================
-Precision || 0.9362 | 0.9906 |
-OVERALL 0/1 ACCURACY: 0.968927
+Precision || 0.9366 | 0.9953 |
+OVERALL 0/1 ACCURACY: 0.971751
 LOG LOSS/instance: Infinity
 Test-set entropy (prior Log-Loss/instance): 0.956998
 LOG-LOSS REDUCTION (RIG): -Infinity
-AUC: 0.992809
+AUC: 0.994403
 Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
 TEST POSITIVE RATIO: 0.3191 (105.0/(105.0+224.0))
 Confusion table
 ||======================
 PREDICTED || positive | negative | Recall
 TRUTH ||======================
-positive || 102 | 3 | 0.9714
-negative || 4 | 220 | 0.9821
+positive || 100 | 5 | 0.9524
+negative || 3 | 221 | 0.9866
 ||======================
-Precision || 0.9623 | 0.9865 |
-OVERALL 0/1 ACCURACY: 0.978723
-LOG LOSS/instance: 0.239330
+Precision || 0.9709 | 0.9779 |
+OVERALL 0/1 ACCURACY: 0.975684
+LOG LOSS/instance: 0.227705
 Test-set entropy (prior Log-Loss/instance): 0.903454
-LOG-LOSS REDUCTION (RIG): 0.735095
-AUC: 0.997279
+LOG-LOSS REDUCTION (RIG): 0.747961
+AUC: 0.997619
 
 OVERALL RESULTS
 ---------------------------------------
-AUC: 0.995044 (0.0022)
-Accuracy: 0.973825 (0.0049)
-Positive precision: 0.949217 (0.0130)
-Positive recall: 0.978252 (0.0068)
-Negative precision: 0.988579 (0.0020)
-Negative recall: 0.970617 (0.0115)
+AUC: 0.996011 (0.0016)
+Accuracy: 0.973718 (0.0020)
+Positive precision: 0.953747 (0.0171)
+Positive recall: 0.972459 (0.0201)
+Negative precision: 0.986580 (0.0087)
+Negative recall: 0.972849 (0.0138)
 Log-loss: Infinity (NaN)
 Log-loss reduction: -Infinity (NaN)
-F1 Score: 0.963412 (0.0034)
-AUPRC: 0.990172 (0.0037)
+F1 Score: 0.962653 (0.0011)
+AUPRC: 0.992269 (0.0025)
 
 ---------------------------------------
 Physical memory usage(MB): %Number%
@@ -1,4 +1,4 @@
 AveragedPerceptron
 AUC Accuracy Positive precision Positive recall Negative precision Negative recall Log-loss Log-loss reduction F1 Score AUPRC Learner Name Train Dataset Test Dataset Results File Run Time Physical Memory Virtual Memory Command Line Settings
-0.995044 0.973825 0.949217 0.978252 0.988579 0.970617 Infinity -Infinity 0.963412 0.990172 AveragedPerceptron %Data% %Output% 99 0 0 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
+0.996011 0.973718 0.953747 0.972459 0.98658 0.972849 Infinity -Infinity 0.962653 0.992269 AveragedPerceptron %Data% %Output% 99 0 0 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
