
On Simple Reactive Neural Networks for Behaviour-Based Reinforcement Learning

Ameya Pore and Gerardo Aragon-Camarasa

Introduction

A framework to generate reactive behaviours, motivated by the Subsumption architecture of the 1980s. In our architecture, each behaviour is represented as a separate module (a deep network) with direct access to processed sensory information, and each module has its own specific goal. We use a simple form of imitation learning, called behaviour cloning, to train these distinct behaviour layers.
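As a rough illustration of the behaviour-cloning step (a minimal sketch with placeholder names and sizes, not the classes used in this repository), a behaviour module can be regressed onto demonstrated actions as follows:

import torch
import torch.nn as nn

# Hypothetical behaviour module: a small MLP mapping processed sensory
# features to an end-effector displacement for one subtask.
class BehaviourModule(nn.Module):
    def __init__(self, feature_dim=16, action_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim))

    def forward(self, features):
        return self.net(features)

# Behaviour cloning: minimise the error between the module's output and the
# demonstrated actions.
def clone_behaviour(module, demo_features, demo_actions, epochs=100, lr=1e-3):
    optimiser = torch.optim.Adam(module.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(module(demo_features), demo_actions)
        loss.backward()
        optimiser.step()
    return module

# Toy usage with random "demonstrations", just to show the shapes involved.
module = clone_behaviour(BehaviourModule(), torch.randn(256, 16), torch.randn(256, 3))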

The primary goal of picking up the object is subdivided into simpler subtasks. For each subtask, there is a reactive network trained specifically for movement in x, y and z. First, the state vector (the coordinates of the objects) is given as input to the feature extraction layer. The extracted features are relayed to the reactive layers, which decide the movement of the end-effector. To simplify terminology, we use the following letters to denote the subordinate actions: approach (a), manipulate (b) and retract (c).
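The sketch below shows one possible layout of a shared feature-extraction layer feeding the three reactive heads; layer sizes and names are illustrative assumptions, not the repository's actual architecture.

import torch
import torch.nn as nn

class ReactivePolicy(nn.Module):
    """Illustrative layout: one feature extractor, three reactive heads."""
    def __init__(self, state_dim=25, feature_dim=16, action_dim=3):
        super().__init__()
        # Shared feature extraction layer over the kinematic state vector.
        self.features = nn.Sequential(nn.Linear(state_dim, feature_dim), nn.ReLU())
        # One reactive head per subordinate action: approach (a), manipulate (b), retract (c).
        self.heads = nn.ModuleDict({
            "a": nn.Linear(feature_dim, action_dim),
            "b": nn.Linear(feature_dim, action_dim),
            "c": nn.Linear(feature_dim, action_dim)})

    def forward(self, state, subtask):
        return self.heads[subtask](self.features(state))

# Example: x, y, z movement proposed by the approach head for a random state.
print(ReactivePolicy()(torch.randn(1, 25), "a"))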

Implementation details

Prerequisites

  • Python 3.5+
  • PyTorch 1.3.1
  • OpenAI Gym 0.10.8
  • MuJoCo physics engine

Here, we use the OpenAI FetchPickAndPlace simulator, which provides a kinematic state vector as the input to our network. To install the OpenAI Fetch simulator, refer to Fetch.
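As a quick sanity check (a minimal sketch, assuming Gym 0.10.x with a working MuJoCo install; the environment id may differ in your installation), the following confirms that the environment loads and exposes the kinematic state vector:

import gym

# 'FetchPickAndPlace-v1' is the usual id in Gym 0.10.x; check gym's registry if it differs.
env = gym.make('FetchPickAndPlace-v1')
obs = env.reset()

# Observations are dicts; 'observation' holds the kinematic state vector
# that is later fed to the feature extraction layer.
print(obs['observation'].shape)

# A random step just to confirm the simulator runs; the action size changes
# once the modified robotics folder adds the rotation degree of freedom.
obs, reward, done, info = env.step(env.action_space.sample())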

Since we are adding an additional degree of freedom (i.e. rotation of the end-effector while grasping) that is not provided by the FetchPickAndPlace environment, the files associated with that environment need to be changed. Replace the robotics folder in gym/envs/robotics with the one in this repository. Then you are all set!

Clone the repository

git clone https://github.com/Ameyapores/Reactive-Reinforcement-learning
cd Reactive-Reinforcement-learning

Step 1: Train on approach

Create a folder named 'train' in the approach directory, where the weights will be saved.

cd approach
python main.py

After 2500 episodes of training, stop (Ctrl+C).

Step 2: Train on manipulate

Transfer the saved weights from the approach folder to the folder where the manipulate weights will be saved. We have created a folder named 'train' in the manipulate directory.
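If main.py writes its checkpoints as files in approach/train (the *.pth pattern below is an assumption; use whatever names the script actually saves), the transfer can be done with an ordinary file copy or a few lines of Python:

import glob
import os
import shutil

# Copy every checkpoint the approach stage produced into manipulate/train.
# The '*.pth' pattern is a guess; match it to the files main.py writes.
os.makedirs('manipulate/train', exist_ok=True)
for ckpt in glob.glob('approach/train/*.pth'):
    shutil.copy(ckpt, 'manipulate/train/')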

cd manipulate
python main.py

After 2000 episodes of training, stop.

Step 3: Train on retract

Transfer the saved weights from the manipulate folder to the folder where the retract weights will be saved. We have created a folder named 'train' in the retract directory.

cd retract
python main.py

After 3000 episodes of training, stop.

Step 4: Train on choreographing the actions using an LSTM

Transfer the saved weights from the retract folder to a new folder named 'weights' in the LSTM directory. Also, create a separate folder named 'train' to save the weights of the actor-critic, and transfer the saved weights from the retract folder to this 'train' folder as well.
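To give a feel for what choreographing the behaviours with an LSTM can look like, here is a rough sketch of an actor-critic whose recurrent state selects among the three pre-trained behaviour modules; all names and dimensions are assumptions and do not reflect the code in main3.py.

import torch
import torch.nn as nn

class LSTMChoreographer(nn.Module):
    """Illustrative actor-critic: a recurrent cell decides which behaviour to activate."""
    def __init__(self, feature_dim=16, hidden_dim=32, n_behaviours=3):
        super().__init__()
        self.lstm = nn.LSTMCell(feature_dim, hidden_dim)
        self.actor = nn.Linear(hidden_dim, n_behaviours)  # logits over {a, b, c}
        self.critic = nn.Linear(hidden_dim, 1)            # state-value estimate

    def forward(self, features, hidden=None):
        h, c = self.lstm(features, hidden)
        return self.actor(h), self.critic(h), (h, c)

# At each step, the behaviour with the highest logit (or a sample from the
# softmax) would be executed by the corresponding frozen reactive network.
logits, value, hidden = LSTMChoreographer()(torch.randn(1, 16))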

cd LSTM
python main3.py

After 2000 episodes of training, stop.

Results

Comparison of end-to-end learning vs. the proposed reactive reinforcement learning architecture.
