Unsafe use of Tensorflow and Caffe on same DeepDetect server / GPU #230

beniz opened this Issue Jan 3, 2017 · 3 comments



beniz commented Jan 3, 2017

TensorFlow's runtime conflicts with the CUDA initialization context of other libraries, see:

TensorFlow devs indicate that a fix for this is not on the roadmap.

The StreamExecutor context issue is confirmed in DeepDetect when running TensorFlow and Caffe services on the same GPU. Further qualification is expected in the future.

Current workaround: build the TensorFlow backend with CPU support only (i.e. with no GPU support built in).
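A sketch of what such a CPU-only build could look like. The cmake flag names below are assumptions for illustration, not verified against the actual DeepDetect build system; check the project README for the real options:

```shell
# Hypothetical build sketch -- the flag names are assumptions, not verified
git clone https://github.com/jolibrain/deepdetect.git
cd deepdetect && mkdir build && cd build
# Enable the TF backend without GPU support so it cannot fight Caffe
# for the CUDA context; Caffe keeps the GPU to itself.
cmake .. -DUSE_TF=ON -DUSE_TF_CPU_ONLY=ON
make
```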


abhiguru commented Jan 9, 2017

Are there performance optimisations for CPU-only? Intel or ARM; I see a spike on all 8 cores when predicting.


beniz commented Jan 9, 2017

The Caffe and TF backends are optimized for CPU, with parallel operations using all cores. It is always best to use batches.
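Batching matters because it amortizes per-request overhead and lets the BLAS backend parallelize one large operation instead of many small ones. A minimal NumPy sketch (not DeepDetect code, just an illustration of the principle) with a toy dense layer:

```python
import numpy as np

# Toy "model": a single dense layer with random weights (illustration only)
W = np.random.rand(256, 256)
inputs = [np.random.rand(256) for _ in range(32)]

# One request per input: 32 separate matrix-vector products
singles = np.stack([W @ x for x in inputs])

# One batched request: a single matrix-matrix product over all 32 inputs,
# which the BLAS backend can parallelize across cores much more effectively
batch = np.stack(inputs)   # shape (32, 256)
batched = batch @ W.T      # shape (32, 256)

# Same numerical result either way; only the call pattern differs
assert np.allclose(singles, batched)
```

The same idea applies to the DeepDetect prediction API: sending several inputs in one request is cheaper than one request per input.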


byronyi commented Mar 17, 2017

I don't think sharing one GPU between multiple applications is generally a good idea. Given that TF takes control of the device's global memory allocation, and of all other resources that require exclusive access, it is better to share your GPU in some other way, for example, time-sharing :P
