

Since the remote part of the environment runs in its own server process, managing remotes is an important task. A remote can run anywhere, whether locally or in the cloud. This section explains three ways to set up remotes.

Docker installation

The majority of the remotes for Universe environments run inside Docker containers, so the first step to running your own remotes is to install Docker (on OSX, we recommend Docker for Mac). You should be able to run docker ps and get something like this:

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

How to start a remote

There are currently three ways to start a remote:

  • Create an automatic local remote using env.configure(remotes=1). In this case, universe automatically creates a remote locally by spinning up a docker container for you.
  • Create a manual remote by spinning up your own Docker container, locally or on a server you control.
  • Create a starter cluster in AWS, which will automatically provide you with cloud-hosted remotes.

Automatic local remotes

To create an automatic local remote, call env.configure(remotes=1) (or remotes=4 if you'd like 4 remotes). This will pull the Docker image and start one container locally.

import gym
import universe # register the universe environments

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1) # downloads and starts a flashgames runtime
observation_n = env.reset()

while True:
    action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]  # your agent here
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()

Manual remotes

To create a manual remote, start the remote Docker container manually on the command line. Remotes can run locally on the same machine as the client, or you can start them on servers you control.

To find the appropriate Docker command-line invocation for each environment, you can look at where we register the runtime for each environment. The command is also printed out conveniently when you run with remotes=1:

[2016-11-25 23:51:04,223] [0] Creating container: Run the same thing by hand as:
    docker run -p 10000:5900 -p 10001:15900 --cap-add NET_ADMIN --cap-add SYS_ADMIN \
        --ipc host
Once you have started the Docker container, configure your agent to connect to the VNC server (port 5900 by default) and the reward/info channel (port 15900 by default).
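The code sample that originally accompanied this paragraph appears to be missing. As a sketch, the remote address takes the form vnc://<host>:<vnc_port>+<rewarder_port> — an assumption based on the default ports above — and is passed to env.configure(remotes=...):

```python
# Sketch: build the address of a manually started remote. The
# vnc://host:vnc_port+rewarder_port form is assumed from the defaults above.
host = 'localhost'
vnc_port = 5900        # VNC server (default)
rewarder_port = 15900  # reward/info channel (default)

remotes = 'vnc://{}:{}+{}'.format(host, vnc_port, rewarder_port)
print(remotes)  # vnc://localhost:5900+15900

# Then, after gym.make(...) as in the earlier example:
# env.configure(remotes=remotes)
```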

To connect manually to multiple remotes, separate their addresses with commas.

If your Docker container is running on a server rather than on localhost, just plug in the appropriate URL or IP address.
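The inline examples for these two cases are missing from this copy. A minimal sketch, assuming the same vnc://host:vnc_port+rewarder_port address form (192.0.2.10 below is a placeholder, not a real server):

```python
# Sketch: two remotes on the same machine, one VNC/rewarder port pair each
# (the second pair of ports is assumed to follow the same +1 pattern):
local_pair = 'vnc://localhost:5900+15900,vnc://localhost:5901+15901'

# A remote on a server you control; 192.0.2.10 is a placeholder address:
server_remote = 'vnc://'

# env.configure(remotes=local_pair)     # connects to both local remotes
# env.configure(remotes=server_remote)  # connects to the remote server
print(local_pair.split(','))
```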


VNC compression settings

The VNC connection supports multiple compression settings that control the tradeoff between a fast, highly compressed, low-quality stream and a slow, uncompressed one. These can be configured using the vnc_kwargs argument to env.configure. The default arguments are:

env.configure(vnc_kwargs={'encoding':'tight', 'fine_quality_level':50, 'subsample_level':2})

Here, tight is a lossy encoding that uses JPEG for compression. We also support zrle instead, which is lossless. The fine_quality_level controls the compression strength from high compression / low quality (0) to low compression / high quality (100). For subsample_level, 0 is highest quality, 2 is low quality and 3 is greyscale. You can investigate the effects of many of these options on the visual fidelity by connecting to an environment using TurboVNC, which allows you to tune these settings in the user interface.

Note that the codecs always operate on deltas of the screen, so if large portions of your screen are not changing then you might be able to afford higher quality settings. Conversely, if you're playing a racing game that takes up a large portion of the screen you should be more worried about bandwidth. The call to step is asynchronous with respect to new frames arriving, so if the connection is too slow the environments will lag.
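For instance, on a mostly static screen you might afford higher-quality settings. A sketch, using only the vnc_kwargs keys shown in the default above:

```python
# Sketch: trade bandwidth for fidelity on a mostly-static screen.
high_quality = {
    'encoding': 'tight',
    'fine_quality_level': 100,  # 100 = lowest compression / highest quality
    'subsample_level': 0,       # 0 = highest quality (3 = greyscale)
}
# env.configure(vnc_kwargs=high_quality)

# Or switch to the lossless encoding entirely:
lossless = {'encoding': 'zrle'}
# env.configure(vnc_kwargs=lossless)
```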

Automatic cloud-hosted remotes: starter cluster

If you have an AWS account, you can spin up a starter Docker cluster to host your own remotes. First click the "Launch Stack" button and follow the steps on the AWS console to deploy your cluster.

Once your stack on AWS is ready, run starter-cluster to start your environments:

$ example/starter-cluster/starter-cluster start -s [stack-name] -i [path-to-ssh-key] \
    --runtime [universe-runtime] -n [number-of-envs]

For example, the following will start two flashgames remotes:

$ pip install -r bin/starter-cluster-requirements.txt
$ bin/starter-cluster -v start -s OpenAI-Universe -i my-ec2-key.pem -r flashgames -n 2
Creating network "flashgames_default" with the default driver
Pulling flashgames-0 (
ip-172-33-1-4: Pulling : downloaded
ip-172-33-28-242: Pulling : downloaded
Creating flashgames_flashgames-0_1
Creating flashgames_flashgames-1_1
Environments started.

Now you can pass the IP address and ports for your remotes to your agent, as was described in the previous section on manual remotes. For example:

$ python bin/ -e flashgames.DuskDrive-v0 -r vnc://,

Running bin/starter-cluster start again will restart your remotes. To stop them, run:

$ bin/starter-cluster stop -s OpenAI-Universe -i my-ec2-key.pem -r flashgames
Stopping flashgames_flashgames-1_1 ... done
Stopping flashgames_flashgames-0_1 ... done
Removing flashgames_flashgames-1_1 ... done
Removing flashgames_flashgames-0_1 ... done
Removing network flashgames_default
Environments stopped.


By default, starter cluster remotes are spawned in AWS's us-west-2 region. In our experience, the latencies of training over the public internet are acceptable, but if you have trouble, it may make sense to try running your agent code on an AWS server in the same region as the remote.

Scaling Up

If you encounter an error like the following

$ bin/starter-cluster -v start -s OpenAI-Universe -i my-ec2-key.pem -r flashgames   -n 2
  Creating network "flashgames_default" with the default driver
  Pulling flashgames-0 (
  ip-172-33-1-4: Pulling : downloaded
  ip-172-33-28-242: Pulling :   downloaded
  ip-172-33-9-51: Pulling :   downloaded
  ip-172-33-27-141: Pulling :   downloaded
  Creating flashgames_flashgames-2_1
  Creating flashgames_flashgames-3_1
  Creating flashgames_flashgames-0_1
  Creating flashgames_flashgames-1_1
  Creating flashgames_flashgames-4_1

  ERROR: for flashgames-0  no resources available to schedule container

then it means you've run out of computing resources on your cluster and need to add more worker nodes. You can add them from the AWS CloudFormation console:

  1. Select your stack
  2. Click "Update Stack" in the "Actions" dropdown
  3. Hit "Next" on the "Select Template" page
  4. Input the new swarm size and hit "Next"
  5. Hit "Next" on the "Options" page
  6. Hit "Update" on the "Review" page

Reusing remotes

If a consistent client_id is supplied to configure(), then the client will attempt to reuse the same remote for the new environment rather than spinning up a new one each time.

Switching between environments in the same runtime (i.e. environments that run on the same underlying Docker container) is possible without creating a new remote; however, switching to an environment in a different runtime requires a new remote. For example, you can switch between flashgames.DuskDrive-v0 and flashgames.NeonRace-v0 without starting a new remote, because they both run in the flashgames runtime, but you cannot reuse that remote for an environment from another runtime.
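The reuse rule above can be sketched as a predicate over runtimes. The environment-to-runtime mapping below is illustrative and partial; the real registry lives in universe/runtimes/:

```python
# Illustrative (partial) mapping of environment ids to their runtimes.
RUNTIME_OF = {
    'flashgames.DuskDrive-v0': 'flashgames',
    'flashgames.NeonRace-v0': 'flashgames',
}

def can_reuse_remote(current_env_id, next_env_id):
    """A remote can be reused only when both environments share a runtime."""
    current = RUNTIME_OF.get(current_env_id)
    return current is not None and current == RUNTIME_OF.get(next_env_id)

print(can_reuse_remote('flashgames.DuskDrive-v0', 'flashgames.NeonRace-v0'))  # True
```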

The configuration for the runtimes is defined in universe/runtimes/, and the specific version number tags for the corresponding Docker images are specified in runtimes.yml.