[ATen][Sparse] Use Third-Party Eigen for sparse addmm #101814
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/101814
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit e92166b with merge base e872bf8.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "ciflow/trunk"

Thank you for the pull request @Aidyn-A - which issue does this address? Do you have timings on the performance?

Hi @cpuhrsch,
Among the existing issues I found this one: I do not have performance numbers at the moment, but I can provide some.
A couple of comments; however, all of them assume that there is general agreement on introducing an Eigen dependency.
@cpuhrsch has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

I'm going to import this to get a sense for the internal build setup and whether this needs to be co-landed.

@cpuhrsch has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Various internal build system changes should make this GH1 landable, but I'm reimporting to verify. I'll be on PTO in a few days, but back July 10th. If it doesn't land before I go on PTO, I'll pick this back up then.
@@ -0,0 +1,381 @@
#include <ATen/native/eigen/SparseBlasImpl.h>
Can you move this file into aten/src/ATen/native/sparse/eigen/SparseBlasImpl.cpp instead of aten/src/ATen/native/eigen/SparseBlasImpl.cpp?
Hope 6a91e00 is what you meant
@cpuhrsch has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Hi @cpuhrsch,

@Aidyn-A - Yes, could you please try to rebase and I'll confirm that it works internally? Please mention me once that's done.

@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 6e8cb4a to 2e428d2 (Compare)
@cpuhrsch has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased 5d21600 to cec2a79 (Compare)
@pytorchbot rebase

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.

Successfully rebased cec2a79 to e92166b (Compare)
@cpuhrsch re-opening this PR. Is there anything I can do to resolve the internal failures?
…155357)

This pull request adds the following ops for sparse matrices using the Eigen library:

```python
add(a_csr, b_csr)
add(a_csc, b_csc)
addmm(c_csr, a_csr, b_csr)
addmm(c_csr, a_csr, b_csc)
addmm(c_csr, a_csc, b_csc)
addmm(c_csr, a_csc, b_csr)
addmm(c_csc, a_csr, b_csr)
addmm(c_csc, a_csr, b_csc)
addmm(c_csc, a_csc, b_csc)
addmm(c_csc, a_csc, b_csr)
```

Currently, the operations for sparse matrices on CPU are available through MKL only. Because MKL is not available on `aarch64`, these ops are unavailable on machines with ARM-based CPUs, including Apple Silicon, AWS Graviton and NVIDIA Grace. This PR addresses the issue by using Eigen as a backend for the above ops.

This is a refactored version of my previous PR #101814. The main difference from the old one is that this version does not enable Eigen by default.

Pull Request resolved: #155357
Approved by: https://github.com/pearu, https://github.com/eqy
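For readers who want to exercise this path, a minimal usage sketch follows. It is written for illustration here, not taken from the PR's tests, and it assumes a CPU build where sparse CSR addmm is available (whether through MKL or an Eigen backend like the one proposed in #101814):

```python
import torch

# Build three sparse CSR operands from dense tensors (illustration only).
a = torch.randn(4, 6).relu().to_sparse_csr()
b = torch.randn(6, 5).relu().to_sparse_csr()
c = torch.randn(4, 5).relu().to_sparse_csr()

# addmm(c, a, b) computes beta * c + alpha * (a @ b); here all operands are CSR,
# matching the addmm(c_csr, a_csr, b_csr) case listed above.
out = torch.addmm(c, a, b, beta=1.0, alpha=1.0)

# Compare against the dense reference.
ref = torch.addmm(c.to_dense(), a.to_dense(), b.to_dense())
torch.testing.assert_close(out.to_dense(), ref)
```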
This pull request adds functionality for addmm with the following layouts:
using Eigen for non-MKL builds.
Edit: thanks to the comment #101814 (comment), the following ops were also added:
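As background (my illustration, not part of the PR's diff): a CSR tensor exposes exactly the three buffers that a CPU sparse backend such as MKL or Eigen consumes, which is what makes swapping in a different backend feasible:

```python
import torch

# Illustration only: inspect the three CSR buffers of a sparse tensor.
a = torch.tensor([[1., 0., 2.],
                  [0., 0., 3.],
                  [4., 5., 0.]]).to_sparse_csr()

print(a.crow_indices())  # compressed row pointers: tensor([0, 2, 3, 5])
print(a.col_indices())   # column indices of the nonzeros: tensor([0, 2, 2, 0, 1])
print(a.values())        # nonzero values: tensor([1., 2., 3., 4., 5.])
```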