
ENH Add support for np.nan values in SplineTransformer #28043

Open · wants to merge 44 commits into base: main
Conversation

@StefanieSenger (Contributor) commented Jan 2, 2024

Reference Issues/PRs

Closes #26793

What does this implement/fix? Explain your changes.

Adds support for np.nan values in SplineTransformer.

  • adds a param handle_missing : {'error', 'constant'} to __init__, where 'error' preserves the previous behaviour and 'constant' handles nan values by setting their spline values to all zeros (see the sketch after this list)
  • adds a new test (but it needs to be extended)
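
A minimal usage sketch of the proposed behaviour (note: handle_missing is the parameter added by this PR and is not part of released scikit-learn; "constant" was the option name at the time of writing):

    import numpy as np
    from sklearn.preprocessing import SplineTransformer

    X = np.array([[1.0], [2.0], [np.nan], [4.0]])
    # handle_missing="constant" is the option proposed in this PR:
    splt = SplineTransformer(degree=2, n_knots=3, handle_missing="constant")
    XBS = splt.fit_transform(X)
    # n_splines = n_knots + degree - 1 = 4 output columns for the single
    # feature; the row that contained np.nan is encoded as all zeros:
    # XBS[2] == [0.0, 0.0, 0.0, 0.0]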

Yet to solve:

  1. I believe that in _get_base_knot_positions I have to prepare _weighted_percentile to exclude nan values, similarly to how np.nanpercentile excludes nan values for the calculation of the base knots. I tried, but it was quite tricky. Edit: Just found that np.nanpercentile will have a sample_weight option soon: PR 24254 in numpy.

  2. Should an error also be raised if SplineTransformer was instantiated with handle_missing="error", then fitted without missing values, and X contains missing values in transform?

github-actions bot commented Jan 2, 2024

✔️ Linting Passed

All linting checks passed. Your pull request is in excellent shape! ☀️

Generated for commit: c1278a6.

@StefanieSenger changed the title from "ENH Adds support for np.nan values in SplineTransformer" to "ENH Add support for np.nan values in SplineTransformer" on Jan 8, 2024
@ogrisel (Member) left a comment:

The PR looks very good but it needs to be merged with main (there are conflicts in the changelog).

Also, I think the get_output_feature_names() method needs to be updated. The tests should be expanded accordingly, maybe to also include a test with .set_output(transform="pandas") (this is how I found out that there was a problem with the output feature names).

@ogrisel (Member) commented Feb 29, 2024

> So, should sparse_output=True and handle_missing="indicator" be prevented from being used together (and show an explicit error message),

I think we should add support for using those two options together.

@ogrisel (Member) left a comment:

Here is a more in-depth review pass. There is indeed a fundamental problem with the current code: the missingness indicators from the training set (computed when calling .fit or .fit_transform) should not be stored as an estimator attribute and reapplied to the test set (when calling .transform). Instead, the missingness pattern should be extracted from the test set itself.

See more details below:

    if self.include_bias:
        return XBS
    return self._concatenate_indicator(XBS)
@ogrisel (Member):

The missingness indicators computed from the X passed to .transform (which can be a test set) should be passed as argument to _concatenate_indicator instead of reusing the mask extracted from the training set.
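
A hypothetical sketch of the requested fix (the _build_spline_features helper and the _concatenate_indicator signature are illustrative assumptions, not the PR's exact code):

    def transform(self, X):
        XBS = self._build_spline_features(X)  # hypothetical helper
        # the mask must come from the X passed to transform (possibly a
        # test set), not from an attribute stored during fit:
        nan_mask = _get_mask(X, np.nan)
        return self._concatenate_indicator(XBS, nan_mask)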

@StefanieSenger (Contributor, Author) left a comment:

Hey @ogrisel, thanks for reviewing and your help.
I went through your comments and could resolve most of the issues.

I've named the new option handle_missing="constant", but that's just an idea. I found that indicator doesn't fit so well anymore if we don't add an indicator column to X. Though with constant as well as with zeros, I feel it's not quite clear from the naming where in the process the nans become something else (before or after calculating the splines). Maybe we can find a name that conveys that info.

There are quite a few things I am a bit confused about.
Generally, I don't know if we want SplineTransformer to change or keep its behaviour if nan values are present.

If we want it to keep its behaviour, then instead of having this test data for comparing equality:

    X_nan = np.array([[1, 1], [2, 2], [3, 3], [np.nan, 4], [4, 4]])
    X = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])

it should maybe rather be

    X_nan = np.array([[1, 1], [2, 2], [3, 3], [np.nan, 4], [4, 4]])
    X = np.array([[1, 1], [2, 2], [3, 3], [99, 4], [4, 4]])

and in this case, the current implementation is wrong. Maybe you can shed some light on this so that I know how to proceed.

I will check the issue with the feature names next.

@StefanieSenger (Contributor, Author) commented:

I was trying to find out about the problem with the feature names that you mentioned here, @ogrisel, but I cannot recreate it. Maybe it was resolved while I worked on the other issues?

This is what I tried (using the code from the existing feature_name_out test):

def test_spline_transformer_feature_names_with_nans():
    """Test that SplineTransformer generates correct feature names if nan values are present."""
    X_nan = np.array([[1, 1], [2, 2], [3, 3], [np.nan, 4], [4, 4]])
    splt = SplineTransformer(
        degree=3,
        n_knots=3,
        handle_missing="constant",
        include_bias=True).fit(X_nan)
    feature_names = splt.get_feature_names_out()
    assert_array_equal(
        feature_names,
        [
            "x0_sp_0",
            "x0_sp_1",
            "x0_sp_2",
            "x0_sp_3",
            "x0_sp_4",
            "x1_sp_0",
            "x1_sp_1",
            "x1_sp_2",
            "x1_sp_3",
            "x1_sp_4",
        ],
    )

    splt = SplineTransformer(
        degree=3,
        n_knots=3,
        handle_missing="constant",
        include_bias=False,
    ).fit(X_nan)
    feature_names = splt.get_feature_names_out(["a", "b"])
    assert_array_equal(
        feature_names,
        [
            "a_sp_0",
            "a_sp_1",
            "a_sp_2",
            "a_sp_3",
            "b_sp_0",
            "b_sp_1",
            "b_sp_2",
            "b_sp_3",
        ],
    )

    splt.set_output(transform="pandas")
    X_transformed = splt.transform(X_nan)
    feature_names = splt.get_feature_names_out(["a", "b"])
    assert_array_equal(
        feature_names,
        [
            "a_sp_0",
            "a_sp_1",
            "a_sp_2",
            "a_sp_3",
            "b_sp_0",
            "b_sp_1",
            "b_sp_2",
            "b_sp_3",
        ],
    )

Everything behaves as it should, I believe. But maybe I didn't understand what exactly you ran into.

@ogrisel (Member) commented Mar 15, 2024

The get_feature_names problem only happens when appending extra output features for the missing indicators (they would then need to be named in that case).

EDIT: I will try to answer your other questions/comments early enough next week.

@ogrisel (Member) commented Mar 15, 2024

About handle_missing="constant": I really prefer handle_missing="zero", handle_missing="zeros", or handle_missing="ignore", to convey either that missing values are encoded as zero spline feature values, or that they are encoded as a value such that a simple downstream linear model would basically "ignore" them.

@StefanieSenger marked this pull request as ready for review on April 18, 2024 13:51
@StefanieSenger (Contributor, Author) commented Apr 18, 2024

Hey @ogrisel, can you give me some feedback?

My current understanding is that if we introduce new 0-values in X_transformed (due to nan values in X), then we also expect different stats for the transformer compared to when no nan values are present.

This would mean that we expect (and test for):

  • a different output format in case of nans (I have already written the tests like this)

  • that the B-splines won't sum up to 1 * n_features (which I need to define for the test; see the sketch after this list)
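
For reference, a minimal sketch of the "sum to one" invariant on nan-free data (assuming the default include_bias=True and values within the fitted range):

    import numpy as np
    from sklearn.preprocessing import SplineTransformer

    X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
    XBS = SplineTransformer(degree=3, n_knots=3).fit_transform(X)
    # B-splines decompose unity per feature, so each row sums to n_features:
    assert np.allclose(XBS.sum(axis=1), X.shape[1])
    # a nan encoded as an all-zero block would break this invariant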

@ogrisel (Member) left a comment:

Here is another pass of feedback with some suggestions to extend testing to more cases, but overall it looks good.

I still need to investigate more to understand the interactions between the scipy version, the sparse output format, and the code coverage warning.


    # prepare mask for nan values
    mask = _get_mask(X_nan, np.nan)
    extended_mask = np.repeat(mask, spline.bsplines_[0].c.shape[1], axis=1)
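
(For context: spline.bsplines_[0].c.shape[1] is the number of spline basis functions generated per input feature, so np.repeat expands the (n_samples, n_features) nan mask to the (n_samples, n_features * n_splines) shape of the transformed output.)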
@ogrisel (Member):

I would rename mask and extended_mask to nan_mask and encoded_nan_mask respectively to be more explicit.

Or alternatively missing_output_mask and missing_output_mask. In which case the code in transform could be updated accordingly.

@StefanieSenger (Contributor, Author):

I have changed them to nan_mask and encoded_nan_mask. I didn't get your second paragraph, though. Were you saying I should replace extended_nan_indicator as well in transform()?

    assert_allclose_dense_sparse(
        X_transformed_same_shape, X_transformed_different_shapes
    )

@ogrisel (Member):

We could further extend this test to cover the following:

Suggested change:

    # Check that nan values are always encoded as zeros, even in columns where
    # no missing values were observed at training time.
    all_missing_row_encoded = spline.transform([[np.nan, np.nan]])
    if sparse_output:
        all_missing_row_encoded = all_missing_row_encoded.toarray()
    assert_allclose(all_missing_row_encoded, 0)

@StefanieSenger (Contributor, Author):

Do we want this instead of raising for values outside the range with extrapolation="error"?

@ogrisel (Member):

That's a good question. I would say so, meaning: I would expect 0-encoded nans even in columns that have never seen a nan at fit time, for the sake of consistency when running cross-validation with rare nan values.

@StefanieSenger (Contributor, Author):

I'm afraid I don't understand your intention here, sorry.

If we test whether there can be zero-encoded values after transform even if SplineTransformer hasn't seen any nans in fit, then we could test it with a less critical input, like fitting without any nans and transforming an X with only one nan value. Input with a whole nan column (see my comment below for why not all nans) could be tested below in a separate test. Would it make sense to pull those test cases apart this way?

@ogrisel (Member):

> then we could test it with a less critical input like fitting without any nans and transforming an X with only one nan value

I agree, this is compatible with what I suggested in:

https://github.com/scikit-learn/scikit-learn/pull/28043/files#r1592724271

In this snippet, spline has been fit on one column with some nan values and another column without any nan values. In both cases the resulting encoding should be zero.

The all-nan case has been moved in your recent commits to a dedicated test, and I find it indeed helps separate concerns and makes the tests easier to follow.

@StefanieSenger (Contributor, Author):

Okay, so do I understand correctly that you don't want me to change or add anything here? Is this resolved?

@StefanieSenger (Contributor, Author) left a comment:

Thanks @ogrisel for reviewing and your suggestions. I have applied those changes or commented. Do you want to have another look?

@ogrisel (Member) left a comment:

Another pass of feedback below:

    else:  # extrapolation in ("constant", "linear")
        xmin, xmax = spl.t[degree], spl.t[-degree - 1]
        # spline values at boundaries
        f_min, f_max = spl(xmin), spl(xmax)
        # values outside of the feature space during `fit` and nan values get
        # masked out:
        mask = (xmin <= X[:, i]) & (X[:, i] <= xmax)
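
(For context: spl.t holds the knots of the fitted scipy BSpline, and [t[degree], t[-degree - 1]] is its base interval, i.e. the feature range seen during fit, so this mask selects the in-range samples.)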
@ogrisel (Member):

To make the code more straightforward to follow, I think mask could be renamed to something more explicit such as inside_range_mask and mask_inv to outside_range_mask.

@StefanieSenger (Contributor, Author):

Yes, that's a good initiative.

    )

    # check that if X has a feature of all nans SplineTransformer works as usual
    spline.transform(X_nan_full_column)
@ogrisel (Member):

The result of this call should be checked to see that the first column is all zero valued.

EDIT: I see that spline.transform(X_nan_full_column) is called again and its output checked a few lines below.

To avoid confusion, I think the all-nan column case should be checked in a separate test function that would call SplineTransformer on X_allmissing = np.asarray([[np.nan], [np.nan], [np.nan]]) for instance.

@StefanieSenger (Contributor, Author):

Oh yes, I will make it into a separate test.

I'll be keeping X_nan_full_column = np.array([[np.nan, np.nan], [np.nan, 1]]) for now.

The reason: X_allmissing = np.asarray([[np.nan], [np.nan], [np.nan]]) is a very special case, because in this case np.nanmin(x) is also nan and BSpline.design_matrix() raises. There is no way to avoid this, I believe, because we cannot set the existing nan values in x to any valid value that is within the feature space.

Reproducible:

    X = np.array([[1, 1], [2, 2], [3, 3], [4, 5], [4, 4]])
    X_allmissing = np.array([[np.nan, np.nan], [np.nan, np.nan]])

    spline = SplineTransformer(
        degree=2,
        n_knots=3,
        handle_missing="zeros",
    )

    spline.fit(X)

    all_missing_column_encoded = spline.transform(X_allmissing)

So for now, this case fails with an error from scipy, but I think it's quite a readable error message.
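
(For context: scipy.interpolate.BSpline.design_matrix raises a ValueError for x values outside the base interval when extrapolation is disabled, and nan values fail that bounds check, which is presumably the scipy error referred to here.)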

    assert (X_nan_transform[encoded_nan_mask] == 0).all()

    # check that nan values are always encoded as zeros, even in columns where
    # no missing values were observed at training time.
@ogrisel (Member):

This comment does not seem to reflect what is being tested below: here X_nan_full_column has 2 columns, the first one is all-nan and the second column has a mix of missing and non-missing values.

The comment seems to refer to the test case I proposed in https://github.com/scikit-learn/scikit-learn/pull/28043/files#r1592724271, which is different: check how nan values are encoded at test time in columns that had no missing value at training time (e.g. the second column of X_nan).

@StefanieSenger (Contributor, Author):

I can see how this is confusing. I will try to re-formulate this in the new test below (test_spline_transformer_handles_all_nans), as discussed above. The attempt is to pull those two test groups apart. Is it clearer and more consistent like this?

(The history here is that I took the test you proposed and modified it.)

@StefanieSenger (Contributor, Author) left a comment:

I went through your review, @ogrisel, thanks for the comments. I hope the tests are more clearly separated now.

Comment on lines +1058 to +1061:

    # The column is all np.nan valued. Replace it by a constant
    # column with an arbitrary non-nan value inside: the minimum
    # value within the whole feature space:
    x[:] = np.nanmin(X)
@ogrisel (Member):

We don't need to depend on the content of other input columns to encode the current all-nan column. Any constant value will be spline-encoded to zero, so using a constant 0 input is perfectly fine and much cheaper (it does not require rescanning the full input array).

Suggested change:

    - # The column is all np.nan valued. Replace it by a constant
    - # column with an arbitrary non-nan value inside: the minimum
    - # value within the whole feature space:
    - x[:] = np.nanmin(X)
    + # The column is all np.nan valued. Replace it by a constant
    + # column with an arbitrary non-nan value inside.
    + x[:] = 0

@ogrisel (Member):

Note that this change might require us to ensure that there are no nan knots in the output of the np.percentile call in the fit method (I gave more details as an inline comment in _get_base_knot_positions).

@StefanieSenger (Contributor, Author):

The comment for lines 768-769 also applies here.

    @@ -750,8 +762,15 @@ def _get_base_knot_positions(X, n_knots=10, knots="uniform", sample_weight=None)
        )

        if sample_weight is None:
    -       knots = np.percentile(X, percentiles, axis=0)
    +       knots = np.nanpercentile(X, percentiles, axis=0)
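
A quick sketch of the difference, which is standard numpy behaviour and independent of this PR:

    import numpy as np

    X = np.array([[1.0, np.nan], [2.0, np.nan], [3.0, np.nan]])
    np.percentile(X, [0, 50, 100], axis=0)
    # -> nan for every column that contains any nan
    np.nanpercentile(X, [0, 50, 100], axis=0)
    # -> [1.0, 2.0, 3.0] for the first column; the all-nan column still
    #    yields nan (with an "All-NaN slice encountered" RuntimeWarning),
    #    hence the concern about nan knots for all-nan columns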
@ogrisel (Member):

It's possible that we might need to replace np.nan by 0.0 to be able to encode missing values in all-nan columns without raising an error.

Since any constant column (all percentiles are the same) will be spline-encoded to 0.0, it does not matter which value we use to represent the constant knot values derived from an all-nan training data column.

Note that I am not 100% sure whether this is needed or not.

@StefanieSenger (Contributor, Author):

The comment for lines 768-769 also applies here.

    @@ -765,8 +784,8 @@ def _get_base_knot_positions(X, n_knots=10, knots="uniform", sample_weight=None)
        # `else` is therefore safe.
        # Disregard observations with zero weight.
        mask = slice(None, None, 1) if sample_weight is None else sample_weight > 0
        x_min = np.amin(X[mask], axis=0)
        x_max = np.amax(X[mask], axis=0)
@ogrisel (Member):

Similarly, if np.isnan(x_min) (which means we have an all-nan column), then replacing x_min and x_max by 0 might be helpful to treat this column as a regular constant column and always encode the output to 0.0, whatever the input.

@StefanieSenger (Contributor, Author):

I have tried this, but actually it is not necessary to replace nan values here, because np.amin(X[mask], axis=0) and np.amax(X[mask], axis=0) are calculated from the whole input data (X) as well. So as long as the whole input data is not all nans, those two values will be non-nan and valid.

However, in my experiment on a separate branch, pytest sklearn/preprocessing/tests/test_polynomial.py -k test_spline_transformer_handles_all_nans still fails for extrapolation="error" and sparse output, because setting x[:] = 0 in line 1072 might be outside the feature range. This is in fact why I had previously figured out that x[:] = np.nanmin(X) is a working solution, and the only working solution I could find so far.

But maybe I did something wrong and this is not what you meant. Anyway, if x[:] = np.nanmin(X) is going to be made less computationally heavy, it needs to work for the extrapolation="error" and sparse output case as well.

StefanieSenger and others added 2 commits May 23, 2024 09:20
@StefanieSenger (Contributor, Author) left a comment:

@ogrisel Thank you for going through it again. I appreciate it. :)

I did apply your change suggestions, but I am again struggling with this suggestion, because as it is, it breaks the extrapolation="error" and sparse_output=True test if a whole column is nans. I am not sure you were aware of this when you suggested it, and I tend to think it's not possible any other way, is it?

Would you maybe have another look at this concern specifically?


@StefanieSenger (Contributor, Author) commented May 26, 2024

Ah, I just stumbled over "Imputation of missing values" in the docs. I think we should include "allow_nan" in _more_tags() here, correct?
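
A minimal sketch of what that could look like (_more_tags and the "allow_nan" tag are scikit-learn's private estimator-tag API at the time of this PR; conditioning on handle_missing is an assumption):

    def _more_tags(self):
        # advertise nan support only when nan handling is enabled:
        return {"allow_nan": self.handle_missing != "error"}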

@lorentzenchr (Member) commented:

Can you ping me once a reviewer has approved?

Successfully merging this pull request may close these issues: Handle np.nan / missing values in SplineTransformer.