Enable USE_CUDA #92640
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92640
Note: links to docs will display an error until the docs builds have been completed.
❌ 5 Failures as of commit fd715bc:
NEW FAILURES - the following jobs have failed.
BROKEN TRUNK - the following jobs failed but were also present on the merge base 438f12d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D42616147
Force-pushed 3404947 to bc77bc4 (Compare)
Summary: Pull Request resolved: pytorch#92640. `USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be.
Test Plan: Sandcastle
Differential Revision: D42616147
fbshipit-source-id: 0d034f8bb2d3a9d30dc3ec41e10795eaa74073f0
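As a hedged illustration of what "applying `USE_CUDA` everywhere" can look like in Bazel (this is a hypothetical sketch, not the actual PyTorch build files; the names `use_cuda` and `CUDA_COPTS` are invented here), one common pattern is a `config_setting` plus a shared `select()`-based copts list that every rule consumes:

```starlark
# Hypothetical sketch only -- ":use_cuda" and CUDA_COPTS are illustrative
# names, not taken from the PyTorch repo.

# In a BUILD file: a flag flipped with `bazel build --define=cuda=true`.
config_setting(
    name = "use_cuda",
    values = {"define": "cuda=true"},
)

# In a shared .bzl file: compiler options every target reuses, so that
# -DUSE_CUDA is applied consistently rather than per-target.
CUDA_COPTS = select({
    ":use_cuda": ["-DUSE_CUDA"],
    "//conditions:default": [],
})

# An individual rule then opts in with `copts = CUDA_COPTS`.
```

Centralizing the define this way means a target cannot silently miss `-DUSE_CUDA`, which is the failure mode the PR summary describes.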
Force-pushed bc77bc4 to 983ff1b (Compare)
Force-pushed 983ff1b to 7f17549 (Compare)
Force-pushed 7f17549 to e65a0ca (Compare)
Force-pushed e65a0ca to 4d0f33d (Compare)
Force-pushed 4d0f33d to 2b357d9 (Compare)
Summary: Pull Request resolved: pytorch#92640. `USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be.

Test Plan: Sandcastle

Change `fbcode/caffe2/c10/cuda/test/build.bzl` to:

```
dsa_tests = [
    "impl/CUDAAssertionsTest_1_var_test.cu",
    "impl/CUDAAssertionsTest_catches_stream.cu",
    "impl/CUDAAssertionsTest_catches_thread_and_block_and_device.cu",
    "impl/CUDAAssertionsTest_from_2_processes.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_blocks_and_threads.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_multiple_blocks.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_same_block.cu",
]

def define_targets(rules):
    rules.cc_test(
        name = "test",
        srcs = [
            "impl/CUDATest.cpp",
        ],
        deps = [
            "@com_google_googletest//:gtest_main",
            "//c10/cuda",
        ],
        target_compatible_with = rules.requires_cuda_enabled(),
    )

    rules.cc_test(
        name = "test_my_cuda_tests",  # nocommit
        srcs = [
            "impl/CUDAAssertionsTest_1_var_test.cu",
        ],
        deps = [
            "@com_google_googletest//:gtest_main",
            "//c10/cuda",
        ],
        target_compatible_with = rules.requires_cuda_enabled(),
    )

    for src in dsa_tests:
        name = src.replace("impl/", "").replace(".cu", "")
        rules.cuda_library(
            name = "test_" + name + "_lib",
            srcs = [
                src,
            ],
            deps = [
                "@com_google_googletest//:gtest_main",
                "//c10/cuda",
            ],
            target_compatible_with = rules.requires_cuda_enabled(),
        )
        rules.cc_test(
            name = "test_" + name,
            deps = [
                ":test_" + name + "_lib",
            ],
        )
```

Differential Revision: D42616147
fbshipit-source-id: f64a9ad115677504541212f7b8a618c334c8ccd3
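The per-test loop in the `build.bzl` change derives each Bazel target name from its `.cu` source path. Since Starlark shares this subset of Python, the string transformation can be sketched and checked directly in Python:

```python
# Mirror the name derivation from the build.bzl loop: strip the "impl/"
# prefix and the ".cu" extension, then prefix with "test_".
dsa_tests = [
    "impl/CUDAAssertionsTest_1_var_test.cu",
    "impl/CUDAAssertionsTest_catches_stream.cu",
]

def target_name(src):
    """Return the cc_test target name for a CUDA test source file."""
    return "test_" + src.replace("impl/", "").replace(".cu", "")

names = [target_name(s) for s in dsa_tests]
print(names)
# -> ['test_CUDAAssertionsTest_1_var_test', 'test_CUDAAssertionsTest_catches_stream']
```

Each source thus yields a `cuda_library` named `test_<name>_lib` plus a thin `cc_test` named `test_<name>` depending on it, which keeps the `.cu` compilation inside a rule that understands CUDA.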
Rebase to get past failures.
Force-pushed 2b357d9 to fd715bc (Compare)
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: `USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be. We also fix some test code to use the correct properties.
Test Plan: Sandcastle
Differential Revision: D42616147