Enable USE_CUDA #92640

Closed · wants to merge 1 commit into from
Conversation

@r-barnes (Contributor) commented Jan 19, 2023

Summary: `USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be.

We also fix some test code to use the correct properties.

Test Plan: Sandcastle

Differential Revision: D42616147
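For context on why the define must live in the Bazel rules themselves: in Bazel, a `defines` attribute on a `cc_library` propagates the preprocessor macro to every target that depends on it, while `copts` and `local_defines` apply only to the target being built. A minimal hypothetical sketch (target and file names are illustrative, not from this diff):

```
# Hypothetical BUILD fragment -- names are illustrative only.
cc_library(
    name = "c10_cuda_stub",
    srcs = ["stub.cpp"],
    # `defines` propagates -DUSE_CUDA to all dependent targets,
    # so code guarded by `#ifdef USE_CUDA` compiles consistently
    # across the whole dependency graph.
    defines = ["USE_CUDA"],
)
```

If the macro is set only in some top-level `copts`, targets built outside that path silently compile with the CUDA branches disabled.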

@pytorch-bot (bot) commented Jan 19, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/92640

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 Failures

As of commit fd715bc:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base 438f12d:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor)
This pull request was exported from Phabricator. Differential Revision: D42616147

r-barnes added a commit to r-barnes/pytorch that referenced this pull request Jan 20, 2023
Summary:
Pull Request resolved: pytorch#92640

`USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be.

Test Plan: Sandcastle

Differential Revision: D42616147

fbshipit-source-id: 0d034f8bb2d3a9d30dc3ec41e10795eaa74073f0

r-barnes added a commit to r-barnes/pytorch that referenced this pull request Jan 30, 2023
Summary:
Pull Request resolved: pytorch#92640

`USE_CUDA` is needed in the bazel definitions to ensure that `USE_CUDA` is applied everywhere it should be.

Test Plan:
Sandcastle

Change
```
fbcode/caffe2/c10/cuda/test/build.bzl
```
to
```
dsa_tests = [
    "impl/CUDAAssertionsTest_1_var_test.cu",
    "impl/CUDAAssertionsTest_catches_stream.cu",
    "impl/CUDAAssertionsTest_catches_thread_and_block_and_device.cu",
    "impl/CUDAAssertionsTest_from_2_processes.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_blocks_and_threads.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_multiple_blocks.cu",
    "impl/CUDAAssertionsTest_multiple_writes_from_same_block.cu",
]

def define_targets(rules):
    rules.cc_test(
        name = "test",
        srcs = [
            "impl/CUDATest.cpp",
        ],
        deps = [
            "com_google_googletest//:gtest_main",
            "//c10/cuda",
        ],
        target_compatible_with = rules.requires_cuda_enabled(),
    )

    rules.cc_test(
        name = "test_my_cuda_tests",  # nocommit
        srcs = [
            "impl/CUDAAssertionsTest_1_var_test.cu",
        ],
        deps = [
            "com_google_googletest//:gtest_main",
            "//c10/cuda",
        ],
        target_compatible_with = rules.requires_cuda_enabled(),
    )

    for src in dsa_tests:
        name = src.replace("impl/", "").replace(".cu", "")
        rules.cuda_library(
            name = "test_" + name + "_lib",
            srcs = [
                src,
            ],
            deps = [
                "com_google_googletest//:gtest_main",
                "//c10/cuda",
            ],
            target_compatible_with = rules.requires_cuda_enabled(),
        )
        rules.cc_test(
            name = "test_" + name,
            deps = [
                ":test_" + name + "_lib",
            ],
        )

```

Differential Revision: D42616147

fbshipit-source-id: f64a9ad115677504541212f7b8a618c334c8ccd3
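The loop in the `build.bzl` above derives each test-target name from its `.cu` source path. As an illustrative check (not part of the diff), the same transformation in plain Python, whose string `replace` behaves identically to Starlark's:

```python
# Derive Bazel target names from .cu source paths, mirroring the
# build.bzl loop above: strip the "impl/" prefix and ".cu" suffix.
dsa_tests = [
    "impl/CUDAAssertionsTest_1_var_test.cu",
    "impl/CUDAAssertionsTest_catches_stream.cu",
]

def target_name(src):
    return src.replace("impl/", "").replace(".cu", "")

for src in dsa_tests:
    name = target_name(src)
    # Each source gets a cuda_library ("test_<name>_lib") wrapped by a
    # cc_test ("test_<name>") so the device code is compiled once and
    # the test binary just links against it.
    print("test_" + name + "_lib", "->", "test_" + name)
```
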
@r-barnes (Contributor, Author)
Rebase to get past failures:

[pull / linux-bionic-py3_8-clang8-xla / test (xla, 1, 1, linux.4xlarge)](https://github.com/pytorch/pytorch/actions/runs/4046841519/jobs/6961554701)
[pull / linux-focal-py3.8-gcc7 / test (distributed, 2, 2, linux.2xlarge)](https://github.com/pytorch/pytorch/actions/runs/4046841519/jobs/6960424918)


@facebook-github-bot (Contributor)

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorch-bot added the `ciflow/trunk` label (Trigger trunk jobs on your pull request) on Feb 1, 2023
@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

Labels: ciflow/trunk (Trigger trunk jobs on your pull request), fb-exported, Merged

4 participants