
Use daal::static_threader_reduce in Linear Regression and dispatch grainSize hyperparameter #3217

Draft: wants to merge 7 commits into main

Conversation

@avolkov-intel (Contributor) commented May 19, 2025

Description

Use daal::static_threader_reduce in the Linear Regression algorithm to compute the X^tX and X^tY matrices, in the same way as was done for the Covariance algorithm (#3126).

Also add the grainSize hyperparameter, which controls the minimum number of blocks allocated to a single thread, to the list of dispatched Linear Regression hyperparameters.
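As a rough illustration of the approach (not the actual oneDAL API), the sketch below shows a static blocked reduction computing X^T X: the rows of X are split into fixed-size blocks, each thread is statically assigned a contiguous range of blocks, with grainSize acting as a lower bound on blocks per thread, and the per-thread partial matrices are merged at the end. All names here are hypothetical; the real primitive is daal::static_threader_reduce.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical sketch of a static blocked reduction for X^T X.
// x is a rows*cols row-major matrix; the result is cols*cols.
std::vector<double> blockedXtX(const std::vector<double>& x,
                               std::size_t rows, std::size_t cols,
                               std::size_t blockRows, std::size_t grainSize)
{
    const std::size_t nBlocks = (rows + blockRows - 1) / blockRows;
    std::size_t nThreads = std::thread::hardware_concurrency();
    if (nThreads == 0) nThreads = 1;
    // grainSize: each thread must receive at least grainSize blocks.
    nThreads = std::min(nThreads,
                        std::max<std::size_t>(1, nBlocks / std::max<std::size_t>(1, grainSize)));

    // One partial X^T X accumulator per thread (no locking needed).
    std::vector<std::vector<double>> partials(nThreads,
                                              std::vector<double>(cols * cols, 0.0));
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < nThreads; ++t)
    {
        workers.emplace_back([&, t] {
            // Static schedule: thread t owns a contiguous range of blocks.
            const std::size_t lo = nBlocks * t / nThreads;
            const std::size_t hi = nBlocks * (t + 1) / nThreads;
            for (std::size_t b = lo; b < hi; ++b)
            {
                const std::size_t r0 = b * blockRows;
                const std::size_t r1 = std::min(rows, r0 + blockRows);
                for (std::size_t r = r0; r < r1; ++r)
                    for (std::size_t i = 0; i < cols; ++i)
                        for (std::size_t j = 0; j < cols; ++j)
                            partials[t][i * cols + j] += x[r * cols + i] * x[r * cols + j];
            }
        });
    }
    for (auto& w : workers) w.join();

    // Merge the per-thread partial results (the "reduce" step).
    std::vector<double> xtx(cols * cols, 0.0);
    for (const auto& p : partials)
        for (std::size_t k = 0; k < cols * cols; ++k) xtx[k] += p[k];
    return xtx;
}
```

X^tY is accumulated in the same pass in the real implementation; it is omitted here to keep the sketch short.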


Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate documentation PR and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the appropriate label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

@avolkov-intel (Contributor, Author) commented:
/intelci: run

@david-cortes-intel (Contributor) commented:

The failing sklearnex test is due to small numerical differences; it should be safe to just change the thresholds there.

But the test itself is not well designed: it would be better to compare against a reference implementation such as SciPy's, the same way it is done in other tests within sklearnex.

@@ -58,6 +58,7 @@ class lr_train_params_test : public lr_test<TestType, lr_train_params_test<TestT
this->max_cols_batched_ = GENERATE(50);
this->small_rows_threshold_ = GENERATE(15, 70);
this->small_rows_max_cols_batched_ = GENERATE(40);
this->grain_size_ = GENERATE(1, 10);
Contributor:

Please add a test also for single-threaded mode, since it has if-else conditions for it.

Comment on lines +82 to +90
enum ErrorCode
{
ok = 0, /// No error
memAllocationFailed = 1, /// Memory allocation failed
intOverflow = 2, /// Integer overflow
badCast = 3 /// Cannot cast base daal::Reducer to derived class
};
/// Status of the computation.
ErrorCode errorCode;
Contributor:

Please remove those implementation details from the Reducer class.
Because:

  1. Reducer is an interface class and should not contain any implementation specifics.
  2. It was my mistake when I defined those error codes in Covariance. It would be better to instead use the values defined in https://github.com/uxlfoundation/oneDAL/blob/main/cpp/daal/include/services/error_indexes.h#L71. That enum can also be extended if needed.
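A minimal sketch of the reviewer's suggestion, under the assumption that a shared service-level error enum is available: the Reducer interface declares only the reduction contract and reports status through the shared enum, instead of defining its own local ErrorCode. All identifiers below (SharedErrorID, Reducer, SumReducer) are illustrative, not the actual oneDAL names from error_indexes.h.

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for a shared, service-level error enum (illustrative values
// only; the real codes live in services/error_indexes.h).
enum SharedErrorID
{
    NoError = 0,
    ErrorMemoryAllocationFailed,
    ErrorBufferSizeIntegerOverflow,
    ErrorIncorrectReducerType // e.g. failed cast to the derived reducer
};

// Interface class: declares the reduction contract only, with no
// implementation-specific error codes of its own.
struct Reducer
{
    virtual void update(std::size_t begin, std::size_t end) = 0;
    virtual void join(const Reducer& other)                 = 0;
    virtual SharedErrorID status() const                    = 0;
    virtual ~Reducer() = default;
};

// Example concrete reducer: sums a range of doubles.
struct SumReducer : Reducer
{
    const double* data = nullptr;
    double sum         = 0.0;
    SharedErrorID err  = NoError;

    void update(std::size_t begin, std::size_t end) override
    {
        if (!data) { err = ErrorMemoryAllocationFailed; return; }
        for (std::size_t i = begin; i < end; ++i) sum += data[i];
    }

    void join(const Reducer& other) override
    {
        // A failed cast maps to a shared error id rather than a
        // reducer-local "badCast" code.
        const SumReducer* o = dynamic_cast<const SumReducer*>(&other);
        if (!o) { err = ErrorIncorrectReducerType; return; }
        sum += o->sum;
        if (o->err != NoError) err = o->err;
    }

    SharedErrorID status() const override { return err; }
};
```

This keeps the interface free of implementation details while every reducer reports failures in one shared vocabulary.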

3 participants