
[CI, enhancement] add pytorch+gpu testing ci #2494

Merged
merged 77 commits into from
Jun 18, 2025

Conversation

@icfaust icfaust commented May 26, 2025

Description

This PR introduces a public GPU CI job to sklearnex. It is not fully featured, but it provides the first public GPU testing. Due to issues with n_jobs support (which are being addressed in #2364), run times are extremely long but still viable. The GPU is currently used only in the sklearn conformance steps, not in sklearnex/onedal testing, because the job runs without dpctl installed for GPU offloading. In the future, queues will be extracted from the data itself in combination with PyTorch, which has had Intel GPU support since PyTorch 2.4 (https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html); this will enable GPU testing in the other steps as well.
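
As a rough illustration (not code from this PR), this is how PyTorch has exposed Intel GPUs through the "xpu" device type since 2.4, which is what makes dpctl-free GPU data possible:

import torch

# torch.xpu mirrors torch.cuda for Intel GPUs; a tensor created this way lives on
# a SYCL-backed device, from which a queue could later be extracted via dlpack.
if torch.xpu.is_available():
    x = torch.ones((1000, 10), device="xpu")
    print(x.device)  # e.g. xpu:0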

This CI is important for at least three reasons. sklearn tests array_api support using the CuPy, PyTorch, and array_api_strict frameworks, and PyTorch is the only GPU data framework without __sycl_usm_array_interface__ that is expected to work for both sklearn and sklearnex. Therefore: 1) it provides an array_api-only GPU testing framework to validate sklearn conformance; 2) it is likely the first entry point for users who wish to use Intel GPU data natively (given the size of the user base); 3) it validates that sklearnex can function properly on GPU without dpctl installed, removing limitations on Python versions and dependency stability issues. Note that PyTorch DOES NOT FOLLOW THE ARRAY_API STANDARD; sklearn uses array_api_compat to shoe-horn in PyTorch support. There are quirks associated with PyTorch that should be tested by sklearnex. This affects how we design our estimators, as checking for __array_namespace__ is insufficient if we wish to support PyTorch.
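
To make the last point concrete, here is a hedged sketch (not the sklearnex implementation) of why checking __array_namespace__ alone is insufficient for PyTorch, and how array_api_compat fills the gap:

import array_api_compat
import numpy as np

def get_namespace(x):
    # Arrays that follow the array API standard expose __array_namespace__ directly.
    if hasattr(x, "__array_namespace__"):
        return x.__array_namespace__()
    # torch.Tensor does not, so fall back to array_api_compat, which wraps torch
    # (and numpy) in a standard-compatible namespace.
    return array_api_compat.array_namespace(x)

xp = get_namespace(np.ones(3))  # works for numpy arrays and torch tensors alike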

Unlike the other public runners, this CI splits the build and test steps into separate jobs. The test step runs on a CPU-only runner and on a GPU runner in parallel. For simplicity it does not use a virtual environment such as conda or venv, but it can still reuse all of the previously written infrastructure.

It uses Python 3.12 and sklearn 1.4 for simplicity (i.e., to mirror other GPU testing systems). This will be updated in a follow-up PR as the job sees more use (likely requiring different deselections).

When successful, a large increase in code coverage should be observed in codecov, as coverage data from this job is also collected.

This will be very important for validating the upcoming array_api changes to the codebase, which would otherwise be obscured by dpctl.

This required the following changes:

  • A new job, 'Identify oneDAL nightly', is created. It removes duplicated code in ci.yml and identifies the oneDAL nightly build to download for all of the GitHub Actions CI runners.
  • Changes to run_sklearn_tests.sh were required to get the GPU deselections to work publicly.
  • 'oneDALNightly/pip' is renamed to 'oneDALNightly/venv' to signify that a virtual environment is used instead of the package manager.
  • Patching of assert_all_finite would fail in combination with array_api dispatching, so changes are made in daal4py to use DAAL only when the input is numpy or a dataframe. Because PyTorch uses the size attribute differently (Tensor.size is a method), additional changes were needed for it.
  • Checking and moving data from GPU to CPU was incorrectly written for array_api, as we did not previously have a GPU data framework to test against. The device is now verified via the __dlpack_device__ attribute, and the data is then converted with asarray if __array__ is available, or with from_dlpack if the __dlpack__ attribute is available (see the sketch after this list). This required exposing some dlpack enums for verification.
  • The PR includes changes from [CI, Enhancement] add external pytest frameworks control #2489, which were needed to limit the CI running time; testing will focus on PyTorch and numpy for CPU and GPU.
  • Some torch tests are deselected in line with the original array_api rollout (ENH: adding array-api-compat and enabling array api conformance tests #2079).
  • test_learning_curve_some_failing_fits_warning[42] is deselected because of an unknown issue with _intercept_ and SVC on GPU (must be investigated).
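
A minimal sketch of the host-side conversion described above, assuming dlpack enum values per the dlpack specification (the real implementation imports the kDLCPU and kDLOneAPI enums from the onedal backend):

import numpy as np

kDLCPU = 1  # DLDeviceType value for CPU in the dlpack specification

def to_host_array(x):
    # __dlpack_device__ returns a (device_type, device_id) pair.
    device_type, _ = x.__dlpack_device__()
    if device_type != kDLCPU:
        raise ValueError("data must be transferred to the CPU before conversion")
    # Prefer __array__ when available; otherwise convert through dlpack.
    if hasattr(x, "__array__"):
        return np.asarray(x)
    return np.from_dlpack(x)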

This will require the following PRs afterwards (by theme):

  • [bugfix, enhancement] Address affinity bug by using threadpoolctl/joblib for n_jobs dispatching #2364: fix issues with thread affinity / Kubernetes pod operation for n_jobs.
  • Introduce PyTorch to onedal/tests/utils/_dataframes_support.py and onedal/tests/utils/_device_selection.py to enable public GPU testing in sklearnex.
  • Rewrite from_data in onedal/utils/_sycl_queue_manager.py to extract queues from __dlpack__ data (a special PyTorch interface is already in place in pybind11).
  • Introduce a centralized lazy-loading approach for the torch, dpnp, and dpctl.tensor frameworks due to their load times (likely following the strategy laid out in array_api_compat).
  • Update the sklearn version so that it no longer replicates other CI systems.
  • Fix the issue with SVC and the _intercept_ attribute (test_learning_curve_some_failing_fits_warning[42] sklearn conformance test).

No performance benchmarks necessary


PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require checkboxes for performance, while a PR with any change to actual code should have the checkboxes and justify how the code change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.


codecov bot commented May 26, 2025

Codecov Report

Attention: Patch coverage is 41.17647% with 10 lines in your changes missing coverage. Please review.

Files with missing lines   | Patch % | Lines
onedal/_device_offload.py  | 33.33%  | 6 Missing and 2 partials ⚠️
onedal/datatypes/table.cpp | 0.00%   | 0 Missing and 2 partials ⚠️

Flag   | Coverage Δ
azure  | 79.84% <46.66%> (-0.09%) ⬇️
github | 73.60% <41.17%> (+1.98%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines      | Coverage Δ
sklearnex/utils/validation.py | 69.33% <100.00%> (+0.84%) ⬆️
onedal/datatypes/table.cpp    | 51.92% <0.00%> (-1.02%) ⬇️
onedal/_device_offload.py     | 75.60% <33.33%> (-5.43%) ⬇️

... and 18 files with indirect coverage changes


@david-cortes-intel
Contributor

@icfaust Is #2465 meant to be merged before this?

@icfaust
Contributor Author

icfaust commented Jun 10, 2025

@icfaust Is #2465 meant to be merged before this?

Another good question. It isn't a requirement; they are independent of one another. They are related in that both test sklearnex on GPU against sklearn 1.4 in CI for the first time.

@@ -18,6 +18,9 @@

from onedal import _default_backend as backend

kDLCPU = backend.kDLCPU
Contributor

Are those global symbols necessary?

Contributor Author

You highlight something that I did which wasn't so perfect. The dlpack device data should be on CPU, and the surest way to verify that is to check for the kDLCPU enum. This will only be used in _transfer_to_host, where kDLOneAPI will be used to recognize whether a queue can be extracted. I need to get it into onedal/_device_offload.py; do you think I should import it from the backend there directly? I put it in _data_conversion since it's a data-conversion topic, but it required importing it through 2-3 spots to get it into _device_offload. Let me know what you think.

Contributor

I would always import it straight from the original source. Let me know if that causes any issues.

Contributor Author

Done

Comment on lines 74 to 79
# this try-catch is a PyTorch-specific fix, as Tensor.size is a function.
# The try-catch minimizes changes to most common code path (numpy arrays).
try:
    too_small = X.size < 32768
except TypeError:
    too_small = math.prod(X.shape) < 32768
Contributor

Wouldn't this work?

Suggested change
# this try-catch is a PyTorch-specific fix, as Tensor.size is a function.
# The try-catch minimizes changes to most common code path (numpy arrays).
try:
    too_small = X.size < 32768
except TypeError:
    too_small = math.prod(X.shape) < 32768
too_small = (math.prod(X.shape) if xp is torch else X.size) < 32768

Contributor Author

It's definitely cleaner. The only problem is that we would have to import torch, which should be avoided unless absolutely necessary. I could modify your suggestion to use array_api_compat's is_torch_array (https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/common/_helpers.py#L149). Let me know what you think.

Contributor

Yeah, so we do this?

from array_api_compat import is_torch_array  # import needed for this to run; avoids importing torch directly

is_torch = is_torch_array(X)
too_small = (math.prod(X.shape) if is_torch else X.size) < 32768

Contributor Author

Done

Contributor

@ethanglaser ethanglaser left a comment

Looks good to me beyond resolution of the open conversations. But this change is large enough that it'd be good to wait until we can validate with internal CI before merging.

Also huge props for the thorough description - makes reviewing a lot easier

    bash .ci/scripts/describe_system.sh
- name: Install test requirements
  run: |
    pip install -r dependencies-dev
Contributor

I believe installation of dependencies-dev can be skipped for the test env.

Contributor Author

Oh interesting. You are absolutely right: if we have implemented requirements-test.txt properly, then dependencies-dev shouldn't be necessary. I'll remove it and see what happens. If it works, we would then have a public way of testing that they work as intended separately (which we don't currently do).

Contributor Author

Done! Looks like it works.

@david-cortes-intel
Contributor

Something broke with the new coverage stats and I need to investigate it.

@icfaust It's a general world-wide GCP outage.

@napetrov napetrov dismissed ahuber21’s stale review June 17, 2025 04:49

Andreas is on leave

@icfaust
Contributor Author

icfaust commented Jun 17, 2025

/intelci: run

1 similar comment
@icfaust
Contributor Author

icfaust commented Jun 18, 2025

/intelci: run

@icfaust
Contributor Author

icfaust commented Jun 18, 2025

Had to switch to just using math.prod(X.shape), like sklearn does, due to private CI infrastructure issues. This removes the array_api_compat dependency.
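
For reference, a minimal sketch of the resulting check (using the 32768 threshold from the snippets discussed above); illustrative rather than the exact merged code:

import math

# math.prod(X.shape) works for numpy arrays and PyTorch tensors alike, since both
# expose a shape tuple, so no array_api_compat helper is required.
too_small = math.prod(X.shape) < 32768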

@icfaust
Contributor Author

icfaust commented Jun 18, 2025

/intelci: run

@icfaust icfaust merged commit 95ec3ea into uxlfoundation:main Jun 18, 2025
32 checks passed
@icfaust icfaust deleted the dev/pytorch_testing_CI branch June 18, 2025 10:21
david-cortes-intel pushed a commit to david-cortes-intel/scikit-learn-intelex that referenced this pull request Jun 18, 2025
(The referenced commit message lists several dozen incremental "Update ..." commits touching ci.yml, conftest.py, _dataframes_support.py, run_test.sh, run_sklearn_tests.sh, validation.py, _device_offload.py, _data_conversion.py, __init__.py, table.cpp, and deselected_tests.yaml.)