merge two dockerfiles
fxia22 committed Oct 1, 2018
1 parent 192ee5b commit 69bbffe
Showing 3 changed files with 17 additions and 148 deletions.
Dockerfile (2 changes: 1 addition & 1 deletion)
@@ -3,7 +3,7 @@
# docker build -t gibson .
# docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix gibson

FROM nvidia/cudagl:9.0-base-ubuntu16.04
FROM nvidia/cudagl:9.0-runtime-ubuntu16.04

RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-samples-$CUDA_PKG_VERSION && \
Dockerfile_server (130 changes: 0 additions & 130 deletions)

This file was deleted.

README.md (33 changes: 16 additions & 17 deletions)
@@ -86,9 +86,15 @@ We use docker to distribute our software; you need to install [docker](https://d

Run `docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi` to verify your installation.

You can either 1. build your own docker image or 2. pull from our docker image.
You can either 1. pull from our docker image or 2. build your own docker image.

1. Build your own docker image (recommended)

1. Pull from our docker image (recommended)
```bash
docker pull xf1280/gibson:0.3.1
```

2. Build your own docker image
```bash
git clone https://github.com/StanfordVL/GibsonEnv.git
cd GibsonEnv
@@ -99,27 +105,20 @@ If the installation is successful, you should be able to run `docker run --runti
dataset files in docker image to keep our image slim, so you will need to mount it to the container when you start a container.


2. Or pull from our docker image
```bash
docker pull xf1280/gibson:0.3.1
```
#### Notes on deployment on a headless server

We have another docker file that supports deployment on a headless server and remote access with TurboVNC+virtualGL.
You can build your own docker image with the docker file `Dockerfile_server`.
Gibson Env supports deployment on a headless server and remote access with `x11vnc`.
You can build your own docker image with the same `Dockerfile` as above.
Instructions to run gibson on a headless server (requires X server running):

1. Install nvidia-docker2 dependencies following the starter guide.
2. Use `openssl req -new -x509 -days 365 -nodes -out self.pem -keyout self.pem` to create the `self.pem` file.
3. Run `docker build -f Dockerfile_server -t gibson_server .` to build a new docker image from `Dockerfile_server` that supports VirtualGL and TurboVNC.
4. `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset -p 5901:5901 gibson_server`
In the docker terminal, start `/opt/websockify/run 5901 --web=/opt/noVNC --wrap-mode=ignore -- vncserver :1 -securitytypes otp -otp -noxstartup` in the background, potentially with `tmux`.
5. Run gibson with `DISPLAY=:1 vglrun python <gibson example or training>`.
6. Visit your `host:5901` and type in the one-time password to see the GUI.
1. Install nvidia-docker2 dependencies following the starter guide. Install `x11vnc` with `sudo apt-get install x11vnc`.
2. Have an X server running on your host machine, and run `x11vnc` on DISPLAY `:0`.
3. `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset <gibson image name>`
4. Run gibson with `python <gibson example or training>` inside docker.
5. Visit your `host:5900` and you should be able to see the GUI. A consolidated sketch of these commands is shown below.
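
For reference, here is a minimal, hypothetical consolidation of the steps above; the image tag `gibson` and the dataset path `/data/gibson_dataset` are placeholders for your own setup, and `x11vnc` serves display `:0` on port 5900 by default.

```bash
# Hypothetical sketch of the headless workflow above; `gibson` and
# /data/gibson_dataset are placeholders for your image tag and dataset path.

# On the host: install x11vnc and attach it to the running X display :0
# (it listens on port 5900 by default).
sudo apt-get install x11vnc
x11vnc -display :0 -forever &

# Start the container, mounting the X socket and the dataset folder.
docker run --runtime=nvidia -ti --rm -e DISPLAY \
    -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 \
    -v /data/gibson_dataset:/root/mount/gibson/gibson/assets/dataset \
    gibson

# Inside the container: run any gibson example or training script.
python <gibson example or training>

# Finally, point a VNC client at <host>:5900 to see the GUI.
```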

If you don't have an X server running, you can still run gibson; see [this guide](https://github.com/StanfordVL/GibsonEnv/wiki/Running-GibsonEnv-on-headless-server) for more details.


B. Building from source
-----
If you don't want to use our docker image, you can also install gibson locally. This will require some dependencies to be installed.
@@ -168,7 +167,7 @@ Uninstalling gibson is easy. If you installed with docker, just run `docker images
Quick Start
=================

First run `xhost +local:root` on your host machine to enable display. You may need to run `export DISPLAY=:0.0` first. After getting into the docker container with `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson`, you will get an interactive shell. Now you can run a few demos.
First run `xhost +local:root` on your host machine to enable display. You may need to run `export DISPLAY=:0` first. After getting into the docker container with `docker run --runtime=nvidia -ti --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v <host path to dataset folder>:/root/mount/gibson/gibson/assets/dataset gibson`, you will get an interactive shell. Now you can run a few demos.
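
As a quick, hypothetical recap of that sequence (with `/data/gibson_dataset` standing in for your dataset path):

```bash
# Hypothetical sketch of the Quick Start sequence above;
# /data/gibson_dataset is a placeholder for your dataset path.
export DISPLAY=:0      # only if DISPLAY is not already set on the host
xhost +local:root      # allow the container to draw on the host display

docker run --runtime=nvidia -ti --rm -e DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /data/gibson_dataset:/root/mount/gibson/gibson/assets/dataset \
    gibson
# You now have an interactive shell inside the container and can run the demos.
```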

If you installed from source, you can run the demos directly with the following commands, without using docker.

