Use CUDA version PT when building with CUDA delegate #14355
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14355
Note: Links to docs will display an error until the docs builds have been completed.
❌ 4 New Failures, 7 Unrelated Failures as of commit 19c2fb2 with merge base a548635.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
BROKEN TRUNK - The following jobs failed but were present on the merge base:
👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
install_requirements.py (Outdated)
def _check_cuda_enabled():
    """Check if CUDA delegate is enabled via CMAKE_ARGS environment variable."""
    cmake_args = os.environ.get("CMAKE_ARGS", "")
    return "-DEXECUTORCH_BUILD_CUDA=ON" in cmake_args


def _cuda_version_to_pytorch_suffix(major, minor):
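The body of `_cuda_version_to_pytorch_suffix` is cut off in the diff view above. A minimal sketch of what such a helper could look like, assuming it maps a CUDA version to PyTorch's usual `cuXY` wheel-suffix convention (the mapping shown is an assumption, not the PR's actual code):

```python
def _cuda_version_to_pytorch_suffix(major, minor):
    """Map a CUDA toolkit version to a PyTorch wheel suffix, e.g. (12, 6) -> "cu126".

    Sketch only: assumes the cuXY naming used on download.pytorch.org.
    """
    return f"cu{major}{minor}"
```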
Can we pull all CUDA-related functions into util?
install_requirements.py (Outdated)
_torch_url = "" | ||
|
||
|
||
def _determine_torch_url(): |
This seems like it belongs in util as well.
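For context, a URL-selection helper along these lines would cover the behavior this PR describes. This is a hedged sketch: `_detect_cuda_version`, the `nvcc` probing, and the index-URL layout are assumptions, while `_check_cuda_enabled` and `_cuda_version_to_pytorch_suffix` come from the diffs above.

```python
import re
import subprocess

TORCH_NIGHTLY_URL_BASE = "https://download.pytorch.org/whl/nightly"  # assumed layout


def _detect_cuda_version():
    """Parse the local CUDA toolkit version from `nvcc --version`, or return None."""
    try:
        out = subprocess.run(
            ["nvcc", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.search(r"release (\d+)\.(\d+)", out)
    return (int(match.group(1)), int(match.group(2))) if match else None


def _determine_torch_url():
    """Pick the CPU index unless the CUDA delegate is enabled."""
    if not _check_cuda_enabled():
        return f"{TORCH_NIGHTLY_URL_BASE}/cpu"
    version = _detect_cuda_version()
    if version is None:
        raise RuntimeError("CUDA delegate enabled but no CUDA toolkit was found")
    # Exact-match policy from the PR description: no fallback to a nearby version.
    return f"{TORCH_NIGHTLY_URL_BASE}/{_cuda_version_to_pytorch_suffix(*version)}"
```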
Force-pushed from 5d521f3 to b00bc14.
_torch_url_cache = "" | ||
|
||
|
||
def determine_torch_url(torch_nightly_url_base, supported_cuda_versions): |
This function is only called twice; is it really necessary to cache?
The main reason to cache is to avoid printing too much noisy output.
Use @functools.lru_cache
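A sketch of that suggestion: decorating the function with `functools.lru_cache` makes the body (and any progress printing) run only once per argument tuple, so the manual `_torch_url_cache` string can go away. One caveat, an implementation detail rather than anything from this PR: the arguments must be hashable, so `supported_cuda_versions` would have to be passed as a tuple rather than a list.

```python
import functools


@functools.lru_cache(maxsize=1)
def determine_torch_url(torch_nightly_url_base, supported_cuda_versions):
    # Runs once per distinct argument tuple; repeated calls return the cached
    # result, so the noisy output the reviewer mentions is printed only once.
    print(f"Resolving torch URL from {torch_nightly_url_base} ...")
    return f"{torch_nightly_url_base}/cpu"  # placeholder for the real selection logic
```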
This PR enables installation of the CUDA version of PT when building from source with the CUDA delegate enabled. More specifically: 1. ET will keep depending on CPU PT as long as the CUDA delegate is not enabled; 2. We will choose the CUDA PT that exactly matches the user's CUDA version: if the user doesn't have CUDA, or has a CUDA version that doesn't exactly match one of the versions PT supports, the installation script will raise an error.
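Putting the reviewed pieces together, the install flow the description implies looks roughly like this. A sketch only: `install_pytorch` and the pip invocation are illustrative placeholders, while `determine_torch_url` is the helper shown in the diffs above.

```python
import subprocess
import sys


def install_pytorch(torch_nightly_url_base, supported_cuda_versions):
    # Resolve the index URL (CPU, or the exactly-matching CUDA build), then
    # install the nightly torch wheel from it; failures surface via check=True.
    url = determine_torch_url(torch_nightly_url_base, supported_cuda_versions)
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "--pre", "torch", "--index-url", url],
        check=True,
    )
```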