
RFC: Improve the performance and usability of linear algebra on CUDA devices #78581

Open
xwang233 opened this issue May 31, 2022 · 2 comments
Labels
module: cuda (Related to torch.cuda, and CUDA support in general), module: linear algebra (Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul), module: magma (related to magma linear algebra cuda support), triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@xwang233
Collaborator

xwang233 commented May 31, 2022

🚀 The feature, motivation and pitch

Currently, the torch.linalg (https://pytorch.org/docs/stable/linalg.html) package provides linear algebra functionality in PyTorch. The CUDA backend is supported by the cuSOLVER and MAGMA libraries.

At present, each linear algebra operator in PyTorch is implemented with cuSOLVER, with MAGMA, or with both. Users can call

torch.backends.cuda.preferred_linalg_library(backend='cusolver')

to prefer one of the two backends. Available options (Python str) are default (use built-in heuristics), cusolver, or magma. See the documentation for details: https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library.

However, each library has limitations, and no heuristic can be optimal across all devices, library versions, input batch sizes, and input shapes. We'd like to collect user feedback and feature requests on the performance and usability of PyTorch linear algebra on CUDA devices. Please leave a comment if you have any suggestions. Thank you!
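
To illustrate why a single heuristic struggles, here is a minimal, entirely hypothetical sketch of a size-and-batch-based backend chooser. The function name `choose_backend` and the thresholds are invented for illustration; PyTorch's actual dispatch heuristics are internal and more involved.

```python
# Hypothetical sketch: a size/batch-based heuristic for picking a linear
# algebra backend. Thresholds and names are invented for illustration only;
# they are not PyTorch's real heuristics.

def choose_backend(batch_size: int, n: int) -> str:
    """Pick 'cusolver' or 'magma' for a (batch_size, n, n) batched problem."""
    # Large batches of small matrices tend to favor MAGMA's batched kernels,
    # while single large factorizations tend to favor cuSOLVER's dense kernels.
    if batch_size >= 64 and n <= 128:
        return "magma"
    return "cusolver"

print(choose_backend(1, 4096))   # one large matrix -> cusolver
print(choose_backend(256, 32))   # many small matrices -> magma
```

Any fixed set of thresholds like this will be wrong for some device or library version, which is exactly why a user-facing override such as preferred_linalg_library is useful.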

Alternatives

No response

Additional context

No response

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @lezcano @ptrblck @ngimel

@xwang233 added the module: linear algebra, module: magma, and module: cuda labels May 31, 2022
@vadimkantorov
Contributor

vadimkantorov commented May 31, 2022

There were some recent related discussions on API design in #76440. My opinion: for the finest-grained control, allow ops to take op-level execution hints (in the most general case, a dict or namedtuple).
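
As a rough illustration of that suggestion (entirely hypothetical, not an existing or proposed PyTorch API), a per-call hint could be a dict keyword argument that the op consults before dispatching. The function `cholesky` below and the `hints` parameter are invented for this sketch.

```python
# Hypothetical sketch of op-level execution hints, per the suggestion above.
# Nothing here is a real PyTorch API; `hints` and its keys are invented.

from typing import Any, Dict, Optional

def cholesky(matrix: Any, hints: Optional[Dict[str, Any]] = None) -> str:
    """Pretend op that consults per-call hints before dispatching."""
    hints = hints or {}
    backend = hints.get("backend", "default")  # e.g. 'cusolver' or 'magma'
    # A real implementation would dispatch to the chosen library here;
    # this sketch just reports the decision.
    return f"cholesky dispatched via {backend}"

print(cholesky(None))                               # default heuristic
print(cholesky(None, hints={"backend": "magma"}))   # per-call override
```

Compared with a process-wide setting like preferred_linalg_library, per-op hints would let two calls in the same program use different backends.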

@Balandat
Contributor

Balandat commented Jun 1, 2022

cc @gpleiss, @jacobrgardner

@soulitzer added the triaged label and removed the triage review label Jun 6, 2022