[maintenance] lazy load dpnp.tensor/dpnp and prepare for array_api lazy importing #2509

Open
wants to merge 43 commits into
base: main

Conversation

icfaust
Contributor

@icfaust icfaust commented Jun 5, 2025

Description

dpctl and dpnp are quasi-dependencies whose absence is silently tolerated at import time throughout the codebase, which entangles the availability checks with the rest of the code in a hard-to-maintain way. As the number of supported data frameworks increases, this strategy becomes unsustainable: the load time of follow-on frameworks like PyTorch is non-negligible (>1s). If we followed the same import-time strategy, sklearnex load times would grow even when pytorch is available but unused, and the cost would compound with every added framework. Cleanly separating and isolating their use is necessary.

Therefore we first need to move dpnp and dpctl.tensor support to a lazy loading approach, which follow-on frameworks will then extend. The next step, pytorch queue extraction, will require this infrastructure.

The strategy follows that of array_api_compat, which can check for namespaces without importing the actual modules; for direct use of the frameworks, a dependency injection + monkeypatching scheme is used via the lazy_import decorator.
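For intuition, here is a minimal sketch of what such a `lazy_import` decorator could look like (hypothetical; the PR's actual implementation may differ): the named module is imported only on the first call and is injected as a leading argument thereafter, so merely importing sklearnex never pays the framework's import cost.

```python
import functools
import importlib


def lazy_import(*modnames):
    """Inject the named modules as leading arguments of the decorated
    function, importing them only when the function is first called."""

    def decorator(func):
        cache = {}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if "mods" not in cache:  # first call: perform the real imports
                cache["mods"] = tuple(
                    importlib.import_module(m) for m in modnames
                )
            return func(*cache["mods"], *args, **kwargs)

        return wrapper

    return decorator


# Usage sketch with a stand-in module ("math" here, where the real code
# would name e.g. "dpctl.memory"):
@lazy_import("math")
def shape_size(math, shape):
    return math.prod(shape)
```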

NOTE TO REVIEWERS: Let me know if I should run performance benchmarks for this.


PR should start as a draft, then move to ready for review state after CI is passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added a respective label(s) to PR if I have a permission for that.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least summary table with measured data, if performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended benchmarking suite and provided corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

try:
    too_small = X.size < 32768
except TypeError:
    too_small = math.prod(X.shape) < 32768
Contributor

Could also use np.prod, since numpy is already imported throughout the codebase.

Contributor Author

https://github.com/scikit-learn/scikit-learn/blob/73a8a656b8df6d02cf88ef8f9cf98373a3f42051/sklearn/utils/_array_api.py#L215 Not entirely sure how numpy would interact with pytorch in that case. Could check that if you want, but it's following the precedent set by sklearn itself.
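As a standalone illustration of why `math.prod` is the safer fallback here: it consumes any iterable of Python ints (a `torch.Size` is itself a tuple subclass) and returns a plain `int`, whereas `np.prod` routes the shape through NumPy and returns a NumPy scalar.

```python
import math

# Stands in for X.shape from any framework; torch.Size behaves like
# a plain tuple of ints for this purpose.
shape = (128, 16, 4)

n_elements = math.prod(shape)  # plain Python int, no numpy involved
too_small = n_elements < 32768
```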



@functools.lru_cache(100)
def _is_subclass_fast(cls: type, modname: str, clsname: str) -> bool:
Contributor

Would this work if one of those array classes is subclassed by the user?

Contributor Author

Nope, but neither would array_api_compat, meaning that steps before in sklearnex are likely to have thrown an error: https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/common/_helpers.py#L63

Contributor Author

Actually, let me check this, I may be wrong.

return array


@lazy_import("dpctl.memory")
Contributor

Wouldn't importing the module inside the function have the same effect?

Contributor Author

Trying to avoid adding an unnecessary slowdown via the dictionary lookup in sys.modules. I don't think it impacts readability as it is, and it follows precedent set by other codebases like sqlite3: https://stackoverflow.com/a/61647085

Contributor

I don't follow. Their idea is to use the module multiple times, but here it gets only used inside a single function. Why would that lazy loader decorator be more efficient than importing the module inside of the function?
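To make the trade-off under discussion concrete (a standalone sketch, not code from the PR): an `import` statement inside a function re-executes the `sys.modules` lookup machinery on every call, while a module resolved once and injected, here simulated with a default argument, is just a bound reference afterwards.

```python
import importlib


def area_inline(r):
    import math  # consults sys.modules on every single call
    return math.pi * r * r


_math = importlib.import_module("math")  # resolved exactly once


def area_bound(r, math=_math):  # module injected as a bound default
    return math.pi * r * r
```

Both are correct; the decorator approach additionally defers that one-time resolution until first use, which matters when the module is expensive to import.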


codecov bot commented Jun 8, 2025

Codecov Report

Attention: Patch coverage is 76.81159% with 32 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onedal/datatypes/_sycl_usm.py | 82.05% | 5 Missing and 2 partials ⚠️ |
| onedal/linear_model/logistic_regression.py | 0.00% | 5 Missing ⚠️ |
| onedal/utils/_third_party.py | 88.88% | 4 Missing and 1 partial ⚠️ |
| onedal/ensemble/forest.py | 0.00% | 4 Missing ⚠️ |
| sklearnex/ensemble/_forest.py | 0.00% | 4 Missing ⚠️ |
| onedal/_device_offload.py | 75.00% | 3 Missing ⚠️ |
| onedal/datatypes/tests/common.py | 60.00% | 2 Missing ⚠️ |
| onedal/utils/_array_api.py | 83.33% | 1 Missing and 1 partial ⚠️ |

| Flag | Coverage Δ |
|---|---|
| azure | 79.90% <76.08%> (+0.06%) ⬆️ |
| github | ? |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| onedal/datatypes/__init__.py | 100.00% <100.00%> (ø) |
| onedal/utils/_sycl_queue_manager.py | 72.13% <100.00%> (-1.72%) ⬇️ |
| sklearnex/_device_offload.py | 73.33% <100.00%> (-5.42%) ⬇️ |
| onedal/datatypes/tests/common.py | 90.74% <60.00%> (-1.42%) ⬇️ |
| onedal/utils/_array_api.py | 82.92% <83.33%> (-1.86%) ⬇️ |
| onedal/_device_offload.py | 76.66% <75.00%> (+1.05%) ⬆️ |
| onedal/ensemble/forest.py | 72.63% <0.00%> (-0.87%) ⬇️ |
| sklearnex/ensemble/_forest.py | 79.55% <0.00%> (-4.18%) ⬇️ |
| onedal/linear_model/logistic_regression.py | 28.26% <0.00%> (-1.37%) ⬇️ |
| onedal/utils/_third_party.py | 88.88% <88.88%> (ø) |
| ... and 1 more | |

... and 49 files with indirect coverage changes


@icfaust
Contributor Author

icfaust commented Jun 18, 2025

/intelci: run

@icfaust icfaust marked this pull request as ready for review June 18, 2025 14:12
# limitations under the License.
# ==============================================================================

"""Utilities for accessing third party pacakges such as DPNP, DPCtl.
Contributor

Suggested change
"""Utilities for accessing third party pacakges such as DPNP, DPCtl.
"""Utilities for accessing third party packages such as DPNP, DPCtl.

Contributor

I'm not sure if third_party is the most correct term for these frameworks. Is frameworks_support or frameworks_compat better?

Contributor Author

I'd agree with that, except that we also centralize the import of SyclQueue there for use in a number of locations (which isn't part of a framework), and we already have an equivalent 'datatypes' onedal module.

Comment on lines +126 to +127

    self.classes_ = xp.unique(y)
except AttributeError:
Contributor

A comment explaining why this error type might be expected is needed.
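One likely source of the AttributeError, worth stating in such a comment (this is an assumption, not confirmed by the diff): the array API standard names this operation `unique_values`, so a strict array API namespace has no `unique` attribute at all. A standalone sketch of the guarded call, with a hypothetical helper name and a toy namespace:

```python
import types


def unique_labels(xp, y):
    """Sketch: prefer NumPy-style `unique`, falling back to the
    array API standard spelling `unique_values`."""
    try:
        return xp.unique(y)
    except AttributeError:
        # Strict array API namespaces expose `unique_values` instead.
        return xp.unique_values(y)


# A toy namespace mimicking a strict array API module: no `unique`.
strict_xp = types.SimpleNamespace(
    unique_values=lambda y: sorted(set(y))
)
```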
