Tensorflow Still Trying to use CUDA even when Session Created with device_count={'GPU': 0} #9201
Comments
You want either
@jart: I'm not sure why the config approach I outlined doesn't work and why the only suggestion is to set an env var. Setting the configuration as I did seems to partially work (i.e., it prevents usage of the GPU for the graph) but not totally (i.e., it still locks the device). This seems to violate the principle of least astonishment. It seems like this is either a documentation issue or an issue with how the config is used. The environment-variable approach is not ideal as:
@jart Any thoughts on the above questions/comments?
@zheng-xq Our friend @cancan101 believes it would be less astonishing for our users if
I am experiencing the same issue with TF, and I too believe
I also have the same problem. I will be very happy if this could be supported.
Why is this issue closed?
It's not, but it really isn't a priority, as you can (I know it's ugly)
If you want, you could also wrap the whole thing in a decorator:
Someone might work on it one day, but I wouldn't hold my breath.
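A minimal sketch of such a decorator, assuming the `CUDA_VISIBLE_DEVICES` workaround discussed above (the name `force_cpu` is hypothetical, and this only helps if TensorFlow has not already initialized CUDA in the process):

```python
import functools
import os

def force_cpu(fn):
    """Hypothetical decorator: hide all CUDA devices while fn runs.

    Note: TensorFlow reads CUDA_VISIBLE_DEVICES when it first initializes
    CUDA, so this is only effective if TF is imported/used for the first
    time inside the decorated function.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        saved = os.environ.get("CUDA_VISIBLE_DEVICES")
        os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # "-1" hides every device
        try:
            return fn(*args, **kwargs)
        finally:
            # Restore the previous value (or remove the key entirely).
            if saved is None:
                os.environ.pop("CUDA_VISIBLE_DEVICES", None)
            else:
                os.environ["CUDA_VISIBLE_DEVICES"] = saved
    return wrapper
```

The try/finally guarantees the environment is restored even if the wrapped function raises, which matters if other code in the same process does want the GPU.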
@Belval Hehe, yeah, that makes me feel like I want to take a shower.
Hi @cancan101! 1.x issues are not supported any more. You can use `tf.device` to switch between CPU and GPU in 2.x versions. Thank you!
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you. |
Closing as stale. Please reopen if you'd like to work on this further. |
Imported from GitHub PR openxla/xla#9201. This PR implements the following optimizations:
```
Gt(Max(a,b), a) -> Gt(b,a)
Gt(Max(a,b), b) -> Gt(a,b)
Gt(Min(a,b), a) -> False
Gt(Min(a,b), b) -> False
Gt(a, Min(a,b)) -> Gt(a,b)
Gt(b, Min(a,b)) -> Gt(b,a)
Gt(a, Max(a,b)) -> False
Gt(b, Max(a,b)) -> False
```
We tested the `Gt(Max(a,b), a) -> Gt(b,a)` optimization on a ResNet-50 model internally. Overall, we observed the following benefits of adding this optimization:
- VmRSS usage: 14% less
- Number of instructions: 13% less
- Memory locations: 22% less

Discussion: **Optimization to fold compare_GT(maximum(a,b), a) into compare_GT(b,a)** (openxla/xla#8346)

Copybara import of the project:
-- 7695a3ff259d2174af46634a6e2276aabe295d05 by Alexander Pivovarov <pivovaa@amazon.com>: Simplify Gt(Max(a,b), a) -> Gt(b,a)

Merging this change closes #9201
PiperOrigin-RevId: 605206447
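As a quick value-level sanity check of the rewrites above, here is a sketch using plain-Python stand-ins for the HLO `Gt`/`Min`/`Max` ops (assuming totally ordered inputs; NaN-propagating floating-point semantics are a separate concern not checked here):

```python
# Plain-Python stand-in for the HLO Gt op.
def gt(x, y):
    return x > y

# Check each rewrite over a small grid of values, covering b < a,
# b == a, and b > a cases.
for a in [-2.0, 0.0, 3.0]:
    for b in [-1.0, 0.0, 5.0]:
        assert gt(max(a, b), a) == gt(b, a)   # Gt(Max(a,b), a) -> Gt(b,a)
        assert gt(max(a, b), b) == gt(a, b)   # Gt(Max(a,b), b) -> Gt(a,b)
        assert gt(min(a, b), a) is False      # Gt(Min(a,b), a) -> False
        assert gt(min(a, b), b) is False      # Gt(Min(a,b), b) -> False
        assert gt(a, min(a, b)) == gt(a, b)   # Gt(a, Min(a,b)) -> Gt(a,b)
        assert gt(b, min(a, b)) == gt(b, a)   # Gt(b, Min(a,b)) -> Gt(b,a)
        assert gt(a, max(a, b)) is False      # Gt(a, Max(a,b)) -> False
        assert gt(b, max(a, b)) is False      # Gt(b, Max(a,b)) -> False
```

Each identity follows from `min(a,b) <= a,b <= max(a,b)`: for example, `max(a,b) > a` can only hold when the maximum is `b`, i.e. exactly when `b > a`.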
Imported from GitHub PR openxla/xla#9201 PiperOrigin-RevId: 605598127
System Information
- Docker image: `tensorflow/tensorflow:1.0.1-devel-gpu` (`('v1.0.0-65-g4763edf-dirty', '1.0.1')`)
- Host: driver version 367.57, kernel 3.13.0-57-generic
Issue
If I set the compute mode to EXCLUSIVE_PROCESS on the Nvidia device (`sudo nvidia-smi -c 1`), then even though I tell the `Session` not to use GPUs (`config=tf.ConfigProto(device_count={'GPU': 0})`), TensorFlow attempts to use the GPU, resulting in an inability to create a session. This can be demonstrated by running the snippet when another process is using CUDA and exclusive-process mode is set.
If exclusive-process mode is not set, then the session is created, but using `nvidia-smi` I see that the process is using GPU RAM (and CUDA). The issue seems limited to TF trying to lock the CUDA device (and allocate ~61 MB of memory); subsequent computations do happen correctly on the CPU.
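For reference, the workaround suggested in this thread is the environment-variable approach: hide the CUDA devices before TensorFlow is imported, since `device_count={'GPU': 0}` only affects graph placement, not CUDA initialization. A hedged sketch (the TensorFlow lines are commented out because they assume the TF 1.0.1 environment from this report):

```python
import os

# Hide all CUDA devices *before* TensorFlow is imported. TF initializes
# CUDA when it first enumerates devices, which is why ConfigProto alone
# does not prevent the device from being locked.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # "-1" (or "" on some setups) hides all GPUs

# import tensorflow as tf
# sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
# With the variable set above, no GPU is visible and the session is CPU-only.
```

The ordering matters: setting the variable after `import tensorflow` may be too late if anything has already triggered device enumeration.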