diff --git a/tensorflow/tools/docker/README.md b/tensorflow/tools/docker/README.md
index c9975b7ce8216e..e94b11e4f2fd8f 100644
--- a/tensorflow/tools/docker/README.md
+++ b/tensorflow/tools/docker/README.md
@@ -31,7 +31,7 @@ We currently maintain three Docker container images:
 Each of the containers is published to a Docker registry; for the non-GPU
 containers, running is as simple as
 
-    $ docker run -it b.gcr.io/tensorflow/tensorflow
+    $ docker run -it -p 8888:8888 b.gcr.io/tensorflow/tensorflow
 
 For the container with GPU support, we require the user to make the appropriate
 NVidia libraries available on their system, as well as providing mappings so
@@ -40,7 +40,7 @@ accomplished via
 
     $ export CUDA_SO=$(\ls /usr/lib/x86_64-linux-gnu/libcuda* | xargs -I{} echo '-v {}:{}')
     $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
-    $ docker run -it $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-devel-gpu
+    $ docker run -it -p 8888:8888 $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-devel-gpu
 
 Alternately, you can use the `docker_run_gpu.sh` script in this directory.
 
diff --git a/tensorflow/tools/docker/docker_run_gpu.sh b/tensorflow/tools/docker/docker_run_gpu.sh
index badc98f660473a..d5fe213952196c 100755
--- a/tensorflow/tools/docker/docker_run_gpu.sh
+++ b/tensorflow/tools/docker/docker_run_gpu.sh
@@ -34,4 +34,4 @@ if [[ "${DEVICES}" = "" ]]; then
   exit 1
 fi
 
-docker run -it $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-full-gpu "$@"
+docker run -it -p 8888:8888 $CUDA_SO $DEVICES b.gcr.io/tensorflow/tensorflow-full-gpu "$@"