Improve description for tf.test.is_gpu_available #27566

Merged (2 commits, Apr 22, 2019)
8 changes: 6 additions & 2 deletions tensorflow/python/framework/test_util.py
@@ -1331,13 +1331,17 @@ def decorated(self, *args, **kwargs):
 def is_gpu_available(cuda_only=False, min_cuda_compute_capability=None):
   """Returns whether TensorFlow can access a GPU.
 
+  Warning: if not GPU version of the package is installed, the function would
+  also, return False. Use `tf.test.is_built_with_cuda` to validate if TensorFlow
+  was build with CUDA support.
+
   Args:
-    cuda_only: limit the search to CUDA gpus.
+    cuda_only: limit the search to CUDA GPUs.
     min_cuda_compute_capability: a (major,minor) pair that indicates the minimum
       CUDA compute capability required, or None if no requirement.
 
   Returns:
-    True if a gpu device of the requested kind is available.
+    True if a GPU device of the requested kind is available.
   """
 
   def compute_capability_from_device_desc(device_desc):

Review comments (Contributor), on the added warning text:

On "if not GPU version of the package is installed": "non-GPU" sounds better than "not GPU" to me.

On "also, return False": Redundant comma here "also, return False".
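For context, a minimal sketch (assuming the TF 1.x-style `tf.test` API shown in this diff) of how `tf.test.is_built_with_cuda` and `tf.test.is_gpu_available` fit together; the (3, 5) compute-capability threshold is only an illustrative value:

import tensorflow as tf

# A non-GPU TensorFlow package makes is_gpu_available() return False,
# so check the build configuration first.
if not tf.test.is_built_with_cuda():
    print("TensorFlow was built without CUDA support (non-GPU package).")
elif tf.test.is_gpu_available(cuda_only=True,
                              min_cuda_compute_capability=(3, 5)):
    print("A CUDA GPU with compute capability >= 3.5 is available.")
else:
    print("Built with CUDA, but no suitable CUDA GPU was found.")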