forked from tensorflow/tensorflow
Workaround for the duplicate dependency bazel error.
For cases where the same dependency is specified by both CUDA and ROCm, using `if_cuda` / `if_rocm` to specify that dependency leads to a "duplicate dependency" bazel error. Switching from `if_cuda` / `if_rocm` to `if_cuda_is_configured` / `if_rocm_is_configured` is not an acceptable solution, because the `*_is_configured` functions are being phased out.

The preferred solution here would be a new `if_gpu(cuda_arg, rocm_arg, default_arg)` function. That solution (or something along those lines) is in the works. This workaround is meant to be a bandage that allows progress to be made while we wait for the real solution.

This workaround introduces an `if_cuda_or_rocm` function, which should be used to specify dependencies that are common to both CUDA and ROCm. While this solution works, it is less than ideal because it needs to duplicate the logic inside the `if_cuda` / `if_rocm` functions.
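The workaround described above could be sketched roughly as follows. This is an illustrative sketch, not the commit's actual diff; the `config_setting` labels (`@local_config_cuda//cuda:using_nvcc`, `@local_config_cuda//cuda:using_clang`, `@local_config_rocm//rocm:using_hipcc`) are assumptions based on the usual TensorFlow build setup.

```starlark
# Sketch of an if_cuda_or_rocm macro. The config_setting labels below are
# assumed from the typical TensorFlow local_config_cuda / local_config_rocm
# repositories and may differ from the actual commit.
def if_cuda_or_rocm(if_true, if_false = []):
    """select()'s if_true when building with either CUDA or ROCm.

    Because the dependency is listed exactly once (under whichever GPU
    backend is enabled), this avoids the "duplicate dependency" bazel
    error that arises from combining if_cuda(...) + if_rocm(...).

    Note: this duplicates the branch logic inside if_cuda / if_rocm,
    which is why it is only a stopgap until an if_gpu(...) function lands.
    """
    return select({
        "@local_config_cuda//cuda:using_nvcc": if_true,
        "@local_config_cuda//cuda:using_clang": if_true,
        "@local_config_rocm//rocm:using_hipcc": if_true,
        "//conditions:default": if_false,
    })
```

A BUILD file would then list the shared dependency once, e.g. `deps = if_cuda_or_rocm(["//some/common:gpu_dep"])` (a hypothetical target name), instead of repeating it under both `if_cuda` and `if_rocm`.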
Showing 2 changed files with 46 additions and 12 deletions.