
Add the ability to reset hyperparameters #2561


Open · wants to merge 16 commits into main

Conversation

@Vika-F (Contributor) commented Jun 18, 2025

Description

A static `reset_hyperparameters(op)` method that resets an algorithm's hyperparameters to their default settings was added to the estimators that expose hyperparameters.
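A minimal usage sketch of the new method, assuming an estimator such as `PCA` that exposes hyperparameters for its `fit` operation (the knob name below is hypothetical and shown for illustration only):

```python
from sklearnex.decomposition import PCA

# Tune a oneDAL-level hyperparameter of PCA's "fit" operation.
# Available attributes differ per estimator and oneDAL version;
# treat the name below as an assumption.
hparams = PCA.get_hyperparameters("fit")
hparams.cpu_macro_block = 4096  # hypothetical knob

# ... run experiments with the modified settings ...

# New in this PR: restore all "fit" hyperparameters to their defaults.
PCA.reset_hyperparameters("fit")
```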


A PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for routine requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a docs-only PR doesn't require the performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

not applicable

@Vika-F added the `enhancement` (New feature or request) label on Jun 18, 2025
codecov bot commented Jun 25, 2025

Codecov Report

Attention: Patch coverage is 93.75000% with 2 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onedal/common/hyperparameters.py | 88.23% | 1 Missing and 1 partial ⚠️ |

| Flag | Coverage Δ |
|---|---|
| azure | 79.74% <90.62%> (+0.03%) ⬆️ |
| github | 73.59% <93.75%> (+<0.01%) ⬆️ |

Flags with carried-forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| sklearnex/__init__.py | 92.85% <100.00%> (ø) |
| sklearnex/_utils.py | 87.95% <100.00%> (+1.65%) ⬆️ |
| sklearnex/decomposition/pca.py | 91.09% <100.00%> (ø) |
| sklearnex/ensemble/_forest.py | 83.24% <100.00%> (ø) |
| sklearnex/linear_model/incremental_linear.py | 83.45% <ø> (ø) |
| sklearnex/linear_model/linear.py | 82.51% <100.00%> (-0.13%) ⬇️ |
| onedal/common/hyperparameters.py | 76.19% <88.23%> (-0.18%) ⬇️ |

... and 1 file with indirect coverage changes


@Vika-F (Contributor, Author) commented Jun 26, 2025

/intelci: run

@Vika-F marked this pull request as ready for review on June 26, 2025, 11:37
@david-cortes-intel (Contributor):

@Vika-F Failure looks legit:


svd_solver = 'full'

    @pytest.mark.parametrize("svd_solver", PCA_SOLVERS)
    def test_pca_dtype_preservation(svd_solver):
>       check_pca_float_dtype_preservation(svd_solver)

decomposition\tests\test_pca.py:518: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
decomposition\tests\test_pca.py:538: in check_pca_float_dtype_preservation
    assert_allclose(pca_64.components_, pca_32.components_, rtol=2e-4)
utils\_testing.py:290: in assert_allclose
    np_assert_allclose(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (<function assert_allclose.<locals>.compare at 0x000001BC11C137F0>, array([[-0.62022375, -0.15983497,  0.38316965,  0....4071014,  0.9080595 , -0.21966814],
       [ 0.12497696,  0.8811203 , -0.16712739,  0.42435375]],
      dtype=float32))
kwds = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=0.0002, atol=0', 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Not equal to tolerance rtol=0.0002, atol=0
E           
E           Mismatched elements: 4 / 12 (33.3%)
E           Max absolute difference: 0.00014655
E           Max relative difference: 0.00087689
E            x: array([[-0.620224, -0.159835,  0.38317 ,  0.66555 ],
E                  [ 0.263179,  0.240851,  0.908007, -0.21966 ],
E                  [ 0.124977,  0.881087, -0.167274,  0.424366]])
E            y: array([[-0.620258, -0.159886,  0.38311 ,  0.66554 ],
E                  [ 0.263121,  0.24071 ,  0.908059, -0.219668],
E                  [ 0.124977,  0.88112 , -0.167127,  0.424354]], dtype=float32)

It appears that something might have assumed that it was running with different parameters and needs its tolerance adjusted. Either that, or the parameters when it reaches that test might not have been reset.


# Reset hyperparameters to the default values
if daal_check_version((2025, "P", 700)):
    PCA.reset_hyperparameters("fit")
Review comment (Contributor):
To ensure this is called, better to put it into a fixture cleanup function:
https://docs.pytest.org/en/6.2.x/fixture.html#teardown-cleanup-aka-fixture-finalization
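A minimal sketch of that pattern, assuming the `daal_check_version` helper used in the snippet above; the fixture name is hypothetical:

```python
import pytest
from daal4py.sklearn._utils import daal_check_version
from sklearnex.decomposition import PCA

@pytest.fixture
def restore_pca_hyperparameters():
    # Yield first so the test body runs; the code after `yield` executes
    # as teardown even when the test fails, so later tests never observe
    # modified hyperparameters.
    yield
    if daal_check_version((2025, "P", 700)):
        PCA.reset_hyperparameters("fit")
```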

Review comment (Contributor):

Like in this PR for example: #2568

@Vika-F (Contributor, Author) commented Jun 26, 2025

> @Vika-F Failure looks legit:
> ...
> It appears that something might have assumed that it was running with different parameters and needs its tolerance adjusted. Either that, or the parameters when it reaches that test might not have been reset.

But we do not use any hyperparameters with the SVD solver, i.e. the changes should not affect this test case at all.
I also see that this step fails on non-Intel runners; maybe that is the cause of the difference.

@david-cortes-intel (Contributor):

> > @Vika-F Failure looks legit:
> > ...
> > It appears that something might have assumed that it was running with different parameters and needs its tolerance adjusted. Either that, or the parameters when it reaches that test might not have been reset.
>
> But we do not use any hyperparameters with the SVD solver, i.e. the changes should not affect this test case at all. I also see that this step fails on non-Intel runners; maybe that is the cause of the difference.

I think there's some logic that modifies the solver for older versions of sklearn: see the `sklearn_check_version("1.5")` check.

Does this bug happen in others PRs? If not, perhaps one thing to do would be to verify which solver from oneDAL is being called in those failing environments.
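One way to verify is sklearnex's standard verbose logging, which reports whether a call was dispatched to oneDAL or fell back to scikit-learn; a sketch:

```python
import logging

import numpy as np
from sklearnex.decomposition import PCA

# sklearnex logs dispatch decisions through the "sklearnex" logger.
logging.basicConfig()
logging.getLogger("sklearnex").setLevel(logging.INFO)

X = np.random.rand(100, 4)
PCA(n_components=3, svd_solver="full").fit(X)
# The INFO output states whether PCA.fit ran via oneDAL or sklearn,
# which narrows down which solver path the failing environment takes.
```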

@david-cortes-intel (Contributor):

@Vika-F The bug doesn't appear to affect other recent PRs, so probably something here is having an effect.

@david-cortes-intel (Contributor):

I'm now seeing the same error in a different PR:
#2577

So probably not caused by the changes here, but odd that it happens only sporadically.

@david-cortes-intel (Contributor):

That's a curious error in the CI:

>       with pytest.raises(ValueError, match=msg):
E       AssertionError: Regex pattern did not match.
E        Regex: 'X has 2 features, but \\w+ is expecting 10 features as input'
E        Input: 'Incompatible dimension for X and Y matrices: X.shape[1] == 2 while Y.shape[1] == 10'

It looks like the error is what was expected, just with a different message.

Is sklearnex perhaps copy-pasting an outdated error message from sklearn?
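If the intent is only to assert that an incompatible-dimension error is raised, one option is a pattern that tolerates both messages; a hypothetical sketch (`model` and `X_two_features` are placeholders):

```python
import pytest

# Accept either scikit-learn's validation message or the one
# sklearnex currently raises.
msg = (
    r"X has 2 features, but \w+ is expecting 10 features as input"
    r"|Incompatible dimension for X and Y matrices"
)
with pytest.raises(ValueError, match=msg):
    model.predict(X_two_features)
```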

Labels: enhancement (New feature or request)
Projects: none yet
Participants: 2