Igor Karpov edited this page Apr 26, 2015 · 2 revisions

The Vision Environment

To run the demo,

  1. Download the Vision Environment for OpenNERO.
  2. Start the Vision Experiment.
  3. Click the First Person Agent button.
  4. Position your agent with the W, A, S, and D keys.
  5. When you have an object in sight, press the Snapshot button.

Each snapshot is automatically processed by OpenNERO, and the results are displayed in a four-panel window. Note that the default version included in the demo is a faster, more efficient implementation of the canonical edge detection algorithm; it should complete its analysis in about 10 seconds and requires numpy and scipy to be installed. If you prefer the slower, more canonical implementation of the edge detection algorithm, or if you are unable to install numpy or scipy, you can check out the cs343vision2 branch.
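The core of an edge detector like the one described above is a gradient computation over the image. The sketch below is not OpenNERO's implementation; it is a minimal illustration of the numpy/scipy-based approach, using scipy.ndimage's Sobel filters to produce a gradient-magnitude map. The image and function name are made up for the example.

```python
import numpy as np
from scipy import ndimage

def edge_magnitude(image):
    """Return a gradient-magnitude map for a 2-D grayscale image.

    This is a rough stand-in for the edge-detection step applied to
    each snapshot, not the actual OpenNERO code.
    """
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    return np.hypot(gx, gy)         # combined edge strength

# A synthetic 8x8 image with a vertical step edge down the middle.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_magnitude(img)
# The strongest response lands on the columns flanking the step;
# flat regions far from the edge produce zero response.
```

The full canonical algorithm adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of this gradient step, which is where most of the extra runtime of the slower implementation goes.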

Downloading the Vision Environment

Follow the steps below to download and install the vision environment in OpenNERO.

1. Download the vision environment files

Extract the archive inside your OpenNERO installation folder.

2. Install the Python Imaging Library

We are going to be using the Python Imaging Library to help with loading, saving, and manipulating the images we take in OpenNERO. If you do not already have it installed, see the install instructions.
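A typical round trip with the Python Imaging Library looks like the sketch below. The image here is a synthetic placeholder built in memory, since the actual snapshot filenames produced by OpenNERO depend on your setup.

```python
import io
from PIL import Image

# Build a small grayscale test image standing in for an OpenNERO snapshot.
img = Image.new("L", (64, 48), color=128)

# Round-trip through PNG in memory, as you would when saving a snapshot
# to disk and reloading it for processing.
buf = io.BytesIO()
img.save(buf, format="PNG")
buf.seek(0)
reloaded = Image.open(buf).convert("L")
print(reloaded.size)  # (width, height) of the reloaded image
```

In practice you would call Image.open() on the snapshot file itself; convert("L") is a convenient way to get a single-channel grayscale image before running edge detection on it.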

3. Install numpy and scipy

You will need to install numpy and scipy if you cannot already import them.
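A quick way to check is to attempt the imports before starting the demo. This small helper (the function name is ours, not part of OpenNERO) returns whether both packages are available:

```python
def have_vision_deps():
    """Return True when both numpy and scipy can be imported."""
    try:
        import numpy   # noqa: F401
        import scipy   # noqa: F401
        return True
    except ImportError:
        return False

if not have_vision_deps():
    print("Install numpy and scipy, or use the cs343vision2 branch.")
```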
