When I use the CUDA Multi-Process Service (MPS) in an nvidia-docker environment, I run into a couple of issues, so I'm wondering whether MPS is supported in nvidia-docker. Please help me, thanks in advance!
Here are the problems I have met:
When I run nvidia-cuda-mps-control -d to start the MPS daemon inside nvidia-docker, I can't see the process from nvidia-smi; however, I can see the process from the host machine.
In comparison, when I run the same command, nvidia-cuda-mps-control -d, on the host machine (a physical server), I do see it from nvidia-smi. (I need to run a GPU program first so the MPS server actually starts.)
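For reference, this is roughly the sequence I use to start MPS (the pipe/log directory paths are just example locations I happen to use, not required values):

# Start the MPS control daemon; the directories below are just examples.
$ export CUDA_VISIBLE_DEVICES=0
$ export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
$ export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log
$ nvidia-cuda-mps-control -d
# Then run any GPU program so the MPS server starts,
# and check whether nvidia-cuda-mps-server shows up:
$ nvidia-smi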
As an example, I tried to run Caffe training with MPS, with 2 training processes at the same time in the nvidia-docker environment. It failed with: F0703 13:39:15.539633 97 common.cpp:165] Check failed: error == cudaSuccess (46 vs. 0) all CUDA-capable devices are busy or unavailable
In comparison, this works fine on the host (physical machine).
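Concretely, what I run inside the container is roughly the following (solver.prototxt is a placeholder for my real solver configuration):

# Launch two Caffe training jobs on the same GPU at the same time.
$ caffe train -solver solver.prototxt -gpu 0 &
$ caffe train -solver solver.prototxt -gpu 0 &
$ wait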
I'm trying this on a P100 GPU with Ubuntu 14.
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
Docker version 17.04.0-ce, build 4845c56
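For completeness, I start the container roughly like this (the image name is just a placeholder for the image I actually use):

$ nvidia-docker run -it nvidia/cuda:8.0-devel bash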
I hope this is the right place to ask, thanks again.