VisionMod
![](OpenNERO-vision.png)
To run the demo,
- Download the Vision Environment for OpenNERO.
- Start the Vision Experiment.
- Click the First Person Agent button.
- Position your agent with the W, A, S, and D keys.
- When you have an object in sight, press the Snapshot button.
Each snapshot is automatically processed by OpenNERO, and the results are displayed in a four-panel window. Note that the default version included in the demo is a faster, more efficient implementation of the canonical edge detection algorithm; it should complete its analysis in about 10 seconds and requires numpy and scipy to be installed. If you prefer the slower, more canonical implementation of the edge detection algorithm, or if you are unable to install numpy or scipy, you can check out the cs343vision2 branch.
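As a rough illustration of what the edge detection step computes, the following sketch uses numpy and scipy to smooth an image and threshold its gradient magnitude. This is a generic gradient-based edge detector written for this page, not OpenNERO's actual implementation; the function name and parameters are invented for the example.

```python
import numpy as np
from scipy import ndimage

def detect_edges(image, sigma=1.0, threshold=0.2):
    """Toy gradient-based edge detector (illustrative only,
    not the algorithm shipped with the Vision Environment)."""
    # Smooth to suppress noise before differentiating.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    # Horizontal and vertical Sobel derivatives.
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    # Gradient magnitude, normalized to [0, 1].
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() or 1.0
    # Mark pixels where the gradient is strong.
    return magnitude > threshold

# Toy image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = detect_edges(img)
```

On the toy image, the detector marks pixels along the border of the square while leaving its interior and the background empty.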
Follow the steps below to download and install the vision environment in OpenNERO.
- Download the vision environment files and extract the archive inside your OpenNERO installation folder.
- Install the Python Imaging Library, which is used for loading, saving, and manipulating the images taken in OpenNERO. If you do not already have it installed, see the install instructions.
- Install numpy and scipy if you are not already able to import them.
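One quick way to confirm the dependencies are available before launching OpenNERO is to try importing them. The helper below is a small sketch written for this page (the function name is invented); note that the `PIL` module may be provided by either the original Python Imaging Library or the Pillow package.

```python
def check_deps(modules=("numpy", "scipy", "PIL")):
    """Return the subset of the given module names that fail to import."""
    missing = []
    for name in modules:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing

missing = check_deps()
if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All vision dependencies found.")
```

If the script reports missing modules, install them before starting the Vision Experiment.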