gym-demo

Explore OpenAI Gym environments

This package provides the gym-demo command, which allows you to explore the various OpenAI Gym environments installed on your system.

This gives you a quick overview of an environment before you start working with it. You will see the environment's observation space and action space, the rewards an agent can expect, and any other information the environment makes available.
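
For comparison, the same kind of information can be read straight from Gym's Python API. The following is a minimal sketch (the environment name is just an example):

import gym

env = gym.make("SpaceInvaders-ram-v4")

print("Observation Space:", env.observation_space)  # e.g. Box(128,)
print("Action Space:", env.action_space)            # e.g. Discrete(6)

# Atari environments also expose human-readable action names:
if hasattr(env.unwrapped, "get_action_meanings"):
    print("Action meanings:", env.unwrapped.get_action_meanings())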

gym-demo on YouTube

Installation

You can install OpenAI Gym and gym-demo using pip:

$ pip install gym[atari]
$ pip install gym-demo

Usage

Use gym-demo --help to display usage information and a list of the environments available in your Gym installation.

$ gym-demo --help

Start a demo of an environment to get information about its observation and action space and observe the rewards an agent gets during a random run.

$ gym-demo SpaceInvaders-ram-v4
Environment: SpaceInvaders-ram-v4

Observation Space: Box(128,)
Low values:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
High values:
[255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255
 255 255]

Action Space: Discrete(6)
Action meanings: ['NOOP', 'FIRE', 'RIGHT', 'LEFT', 'RIGHTFIRE', 'LEFTFIRE']


Running environment demonstration...
Unique environment information is output to standard out:
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 5.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 10.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 15.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 20.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 25.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 3}
Reward: 0.0, Done: False, Info: {'ale.lives': 2}
Reward: 30.0, Done: False, Info: {'ale.lives': 2}
Reward: 0.0, Done: False, Info: {'ale.lives': 2}
Reward: 0.0, Done: False, Info: {'ale.lives': 1}
Reward: 5.0, Done: False, Info: {'ale.lives': 1}
Reward: 0.0, Done: False, Info: {'ale.lives': 1}
Reward: 10.0, Done: False, Info: {'ale.lives': 1}
Reward: 0.0, Done: False, Info: {'ale.lives': 1}
Reward: 15.0, Done: False, Info: {'ale.lives': 1}
Reward: 0.0, Done: False, Info: {'ale.lives': 1}
Reward: 20.0, Done: False, Info: {'ale.lives': 1}
Reward: 0.0, Done: False, Info: {'ale.lives': 1}
Reward: 0.0, Done: True, Info: {'ale.lives': 0}
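
The random run shown above can be approximated with plain Gym calls. This is only a sketch of the underlying idea using the classic gym step API (observation, reward, done, info), not gym-demo's actual implementation:

import gym

env = gym.make("SpaceInvaders-ram-v4")
observation = env.reset()

done = False
while not done:
    action = env.action_space.sample()                  # pick a random action
    observation, reward, done, info = env.step(action)
    print("Reward: {}, Done: {}, Info: {}".format(reward, done, info))

env.close()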

Get more information about OpenAI Gym on their documentation website.
