
[MRG+1] Add sample_weight support to RidgeClassifier #4838

Merged: 1 commit into scikit-learn:master on Jun 9, 2015

Conversation

trevorstephens (Contributor)

Added sample_weight support to RidgeClassifier, as it was available in RidgeClassifierCV but not in the non-CV implementation.

Also added some tests to both of the above to check that it reacts as expected when compared to class_weight from the constructor.

@TomDLT (Member) commented Jun 9, 2015

looks good

@arjoly (Member) commented Jun 9, 2015

probably worth looking at #4490

@agramfort (Member)

LGTM

@agramfort agramfort changed the title [MRG] Add sample_weight support to RidgeClassifier [MRG+1] Add sample_weight support to RidgeClassifier Jun 9, 2015
@trevorstephens (Contributor, Author)

Thanks @arjoly, #4490 appears to affect only RidgeClassifierCV, so it wouldn't affect this PR.

@amueller (Member) commented Jun 9, 2015

thanks.

amueller added a commit that referenced this pull request Jun 9, 2015
[MRG+1] Add sample_weight support to RidgeClassifier
@amueller amueller merged commit 32b2f8e into scikit-learn:master Jun 9, 2015
@trevorstephens trevorstephens deleted the ridge_sw branch June 9, 2015 16:40
@trevorstephens (Contributor, Author)

Thanks for the reviews, guys!

@@ -487,6 +487,35 @@ def test_class_weights():
assert_array_almost_equal(clf.intercept_, clfa.intercept_)


def test_class_weight_vs_sample_weight():
jnothman (Member):

This is a nice test, but is more appropriate as a common test than one specific to Ridge, and I think it's time for there to be common tests regarding sample weight. They would best rely on:

trevorstephens (Contributor, Author):

Agreed @jnothman, nothing special to Ridge here; it's purely a test for classifiers with class_weight (constructor) and sample_weight (fit) to check equivalence and multiplicative combination. It should be able to work its way into a common test, I would think. This test appears in the tree tests (though with an additional multi-output step) and elsewhere (I think I may have pilfered or altered it from SGD originally).
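The equivalence and multiplicative combination being described can be sketched roughly as follows (the data and weight values are illustrative, not taken from the PR):

```python
import numpy as np
from numpy.testing import assert_array_almost_equal
from sklearn.linear_model import RidgeClassifier

# Illustrative two-class data in two features.
X = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0],
              [1.0, 2.0], [2.0, 1.0], [2.0, 2.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# class_weight={0: 1, 1: 2} in the constructor ...
clf_cw = RidgeClassifier(alpha=1.0, class_weight={0: 1, 1: 2}).fit(X, y)

# ... should be equivalent to doubling every class-1 sample via sample_weight.
sw = np.where(y == 1, 2.0, 1.0)
clf_sw = RidgeClassifier(alpha=1.0).fit(X, y, sample_weight=sw)

assert_array_almost_equal(clf_cw.coef_, clf_sw.coef_)
assert_array_almost_equal(clf_cw.intercept_, clf_sw.intercept_)

# And the two should combine multiplicatively: class_weight applied on top of
# an extra per-sample weighting equals the elementwise product of both.
extra = np.linspace(1.0, 2.0, 6)
clf_a = RidgeClassifier(alpha=1.0, class_weight={0: 1, 1: 2}).fit(
    X, y, sample_weight=extra)
clf_b = RidgeClassifier(alpha=1.0).fit(X, y, sample_weight=extra * sw)
assert_array_almost_equal(clf_a.coef_, clf_b.coef_)
```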

jnothman (Member):

I'd wondered whether it was copied from elsewhere. That's upsetting. Perhaps we need an issue to track tests that should be factored into common...


trevorstephens (Contributor, Author):

Upsetting how? That it's propagating outside of common tests?

Best I can trace right now is @dsullivan7's #3931 (a somewhat similar test on multiplicative weights), then RF & D-Tree in #3961, and now here in Ridge. I cannot recall the exact lineage I used from the RF/tree PR tests. I don't see any duplicated inline comments outside of RF/Tree/Ridge on git, FWIW.

Note that in the tree-based classifiers I tested for equivalence in feature_importances_, and here it's coef_.

trevorstephens (Contributor, Author):

Something similar in #4215 coming soon enough too, I imagine, for the rest of the ensemble tests.

jnothman (Member):

> Note that in tree-based classifiers I tested for equivalence in feature_importances_, and here it's coef_.

Hence the use of assert_same_model
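assert_same_model is mentioned but not shown in this thread; a hypothetical sketch of what such a helper might look like (the attribute list and name are assumptions, not the actual scikit-learn helper):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

def assert_same_model(est_a, est_b, decimal=6):
    # Compare whichever fitted attributes both estimators expose, so one
    # check covers linear models (coef_) and trees (feature_importances_).
    compared = False
    for attr in ("coef_", "intercept_", "feature_importances_"):
        if hasattr(est_a, attr) and hasattr(est_b, attr):
            np.testing.assert_array_almost_equal(
                getattr(est_a, attr), getattr(est_b, attr), decimal=decimal)
            compared = True
    assert compared, "no comparable fitted attributes found"

# Two identical fits compare equal.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0], [0.0, 2.0]])
y = np.array([0, 1, 1, 0])
assert_same_model(RidgeClassifier(alpha=1.0).fit(X, y),
                  RidgeClassifier(alpha=1.0).fit(X, y))
```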


trevorstephens (Contributor, Author):

Gotcha


amueller (Member):

Adding one that checks whether sample_weight is equivalent to the corresponding class_weight would be nice, too. I feel it should be pretty straightforward, though. Why would you need a helper? Just check that decision_function is almost_equal.
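A rough sketch of that helper-free check (data and weight values are illustrative):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = (rng.rand(20) > 0.5).astype(int)

# class_weight in the constructor vs the corresponding per-sample weights ...
clf_cw = RidgeClassifier(class_weight={0: 1, 1: 3}).fit(X, y)
sw = np.where(y == 1, 3.0, 1.0)
clf_sw = RidgeClassifier().fit(X, y, sample_weight=sw)

# ... no helper needed: just compare the decision_function outputs directly.
np.testing.assert_array_almost_equal(
    clf_cw.decision_function(X), clf_sw.decision_function(X))
```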

jnothman (Member):

True... as long as we don't have any classification-oriented transformers that are not predictors that accept sample and class weights!


Returns
-------
self : returns an instance of self.
"""
if sample_weight is None:
    sample_weight = 1.
Mathieu Blondel (Member):

Could you also modify this line:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/ridge.py#L285
so that has_sw is False when sample_weight == 1.0?

sample_weight is currently implemented as a rescaling of the data. We can avoid copying the data in the case sample_weight == 1.0.

Another option is to check for sample_weight == 1.0 directly in _rescale_data.

jnothman (Member):

sample_weight == 1.0 is semantically equivalent to sample_weight.ndim == 0?


Member:

For scalar values other than 1, sample_weight has the same effect as changing 1/alpha (or C in other estimators). So this is not very useful; we could potentially raise an exception.
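This can be checked numerically with the regression variant (a sketch under the assumption that a uniform weight only rescales the data-fit term):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X, y = rng.randn(30, 4), rng.randn(30)

# A constant sample_weight of c multiplies the data-fit term by c, which
# yields the same minimizer as dividing alpha by c.
coef_w = Ridge(alpha=1.0).fit(X, y, sample_weight=2.0 * np.ones(30)).coef_
coef_a = Ridge(alpha=0.5).fit(X, y).coef_

np.testing.assert_allclose(coef_w, coef_a, rtol=1e-6)
```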

Member:

I added #4846 to track this issue.

7 participants