Support for sklearn 1.6 conformance testing #2465
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Flags with carried forward coverage won't be shown.
Any impact of these extra checks on throughput or memory?
Generally these are just additions of sklearn version checks, so the impact should be negligible. For reference, roughly 3000 new tests run in 1.6.1 compared to 1.5.2, which can yield a 10-20% increase in runtime for these steps (a minute or two of extra runtime).
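For context, a version check like this is just a comparison against the installed scikit-learn version, so its per-call cost is trivial. A minimal sketch of how such a check can be implemented follows; this is illustrative only, and the real sklearn_check_version helper in sklearnex/daal4py may differ in details.

```python
# Illustrative sketch of a cached scikit-learn version check.
# This is NOT the actual sklearnex helper, just the general pattern.
from functools import lru_cache

import sklearn
from packaging.version import Version


@lru_cache(maxsize=None)
def sklearn_check_version(required: str) -> bool:
    """Return True if the installed scikit-learn is >= `required`."""
    return Version(sklearn.__version__) >= Version(required)


# Usage: gate 1.6-only conformance behavior behind the check.
if sklearn_check_version("1.6"):
    pass  # enable the sklearn 1.6-specific code path here
```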
Relatively straightforward, just some small questions/changes. I think we can get this ready for merge once private CI is available again. The only actual change requested is to duplicate the Ridge changes in LinearRegression and the incremental estimators.
@@ -325,6 +325,15 @@ def _onedal_fit(self, X, y, sample_weight, queue=None):
         self._onedal_estimator.fit(X, y, queue=queue)
         self._save_attributes()
+
+        if sklearn_check_version("1.6"):
Could you do this for LinearRegression and the related incremental algos so that they all behave the same and maintenance is reduced?
I am not sure it's relevant for the others; it originates from scikit-learn/scikit-learn#19746, which specifically addresses Ridge.
The answer is hidden within the linked PR: https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/linear_model/tests/test_common.py#L197. Please place notes in LinearRegression and IncrementalLinearRegression about this deviation, and add this change in return values to IncrementalRidge.
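As a hypothetical sketch of what mirroring that into the other estimators could look like (the import path and the gated body are assumptions, not the actual patch; the real adjustment is whatever the Ridge diff above introduces):

```python
# Hypothetical sketch: replicate the Ridge deviation in LinearRegression /
# IncrementalLinearRegression and the return-value change in IncrementalRidge.
# The import path and the gated body are placeholders, not the actual patch.
from daal4py.sklearn._utils import sklearn_check_version


def _onedal_fit(self, X, y, sample_weight, queue=None):
    self._onedal_estimator.fit(X, y, queue=queue)
    self._save_attributes()

    # NOTE: intentional deviation from stock scikit-learn behavior; see
    # sklearn/linear_model/tests/test_common.py#L197 and
    # scikit-learn/scikit-learn#19746 (originally Ridge-specific).
    if sklearn_check_version("1.6"):
        pass  # placeholder for the same adjustment made in Ridge
```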
@ethanglaser ping me directly once the changes are made, so we can run private CI and merge this quickly.
Description
The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for routine requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving the PR from draft:
PR completeness and readability
Testing