
luke-the-reacher

A deep reinforcement-learning agent for a double-jointed robotic arm, trained using the Deep Deterministic Policy Gradient (DDPG) algorithm.
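
For reference, the sketch below illustrates the core DDPG update step: regressing the critic towards a TD target, updating the actor through the critic, and softly updating the target networks. It is a generic PyTorch illustration under assumed names (`actor`, `critic`, `actor_target`, `critic_target`, the optimizers, and the replay batch) and does not reproduce this repository's implementation.

```python
# Generic DDPG update step (illustrative only; the networks, optimizers, and
# replay batch below are hypothetical and need not match this repository).
import torch
import torch.nn.functional as F

GAMMA, TAU = 0.99, 1e-3

def ddpg_update(actor, critic, actor_target, critic_target,
                actor_opt, critic_opt, batch):
    states, actions, rewards, next_states, dones = batch

    # Critic: minimise the TD error against the target networks.
    with torch.no_grad():
        next_actions = actor_target(next_states)
        q_targets = rewards + GAMMA * (1 - dones) * critic_target(next_states, next_actions)
    critic_loss = F.mse_loss(critic(states, actions), q_targets)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: ascend the critic's estimate of the current policy's value.
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update the target networks towards the local networks.
    for target, local in ((actor_target, actor), (critic_target, critic)):
        for t_param, l_param in zip(target.parameters(), local.parameters()):
            t_param.data.copy_(TAU * l_param.data + (1 - TAU) * t_param.data)
```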

Project Details

Luke-the-reacher is a deep reinforcement-learning agent designed for the Reacher environment from the Unity ML-Agents Toolkit.

The Reacher environment

The state is represented by a vector of 33 elements corresponding to the position, rotation, velocity, and angular velocities of the double-jointed arm. Each action is a vector of 4 real-valued elements between -1 and 1, representing the torques applied to the arm's two joints.

The agent receives a reward of +0.1 for every time step the arm is in contact with the target location. We consider the task solved when the agent reaches an average score of +30 over 100 consecutive episodes.
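
To make these numbers concrete, the sketch below runs one episode with random actions. It assumes `env` is an already-loaded `unityagents.UnityEnvironment` for the Reacher build described in Getting Started below, and `brain_name` is its default brain.

```python
# Minimal sketch of one episode with random actions, assuming `env` and
# `brain_name` come from a loaded Reacher environment (see Getting Started).
import numpy as np

def run_random_episode(env, brain_name, action_size=4):
    env_info = env.reset(train_mode=False)[brain_name]
    state = env_info.vector_observations[0]       # 33-dimensional state vector
    score = 0.0
    while True:
        # Sample 4 random torques and keep them inside [-1, 1].
        action = np.clip(np.random.randn(action_size), -1, 1)
        env_info = env.step(action)[brain_name]
        state = env_info.vector_observations[0]
        score += env_info.rewards[0]              # +0.1 while the arm stays on target
        if env_info.local_done[0]:
            return score
```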

Getting Started

Before running the agent, be sure to complete the following steps:

  1. Clone this repository.
  2. Download the Reacher environment appropriate for your operating system (available here). Be sure to select the file corresponding to Version 1: One (1) Agent.
  3. Place the environment file in the cloned repository folder.
  4. Set up an appropriate Python environment. Instructions are available [here](https://github.com/udacity/deep-reinforcement-learning). You can then verify the setup with the smoke test sketched after this list.
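
A quick smoke test along these lines can confirm that the environment build and Python setup work together; it assumes the Udacity `unityagents` package is installed, and "Reacher.app" is a placeholder for whichever build matches your operating system.

```python
# Smoke test for the setup; replace "Reacher.app" with the build that
# matches your operating system.
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Reacher.app")
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

env_info = env.reset(train_mode=True)[brain_name]
print("Number of agents:", len(env_info.agents))             # expected: 1
print("State size:", env_info.vector_observations.shape[1])  # expected: 33
print("Action size:", brain.vector_action_space_size)        # expected: 4
env.close()
```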

Instructions

You can start running and training the agent by exploring Navigation.ipynb. The repository also contains:

  • luke_reacher.py contains the agent code.
  • reacher_manager.py has the code for training the agent; a sketch of a typical training loop is shown after this list.
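
The outline below sketches what a training loop for this setup typically looks like, including the 30-over-100-episodes stopping criterion. The `agent.act` and `agent.step` method names are assumptions and may differ from the actual interfaces defined in luke_reacher.py and reacher_manager.py.

```python
# Hypothetical training loop; `env`, `brain_name`, and the `agent` interface
# are assumed and may not match this repository's code.
from collections import deque
import numpy as np

def train(env, brain_name, agent, n_episodes=2000):
    scores_window = deque(maxlen=100)                  # last 100 episode scores
    for episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]
        state, score = env_info.vector_observations[0], 0.0
        while True:
            action = agent.act(state)                  # policy action plus exploration noise
            env_info = env.step(action)[brain_name]
            next_state = env_info.vector_observations[0]
            reward, done = env_info.rewards[0], env_info.local_done[0]
            agent.step(state, action, reward, next_state, done)  # store experience and learn
            state, score = next_state, score + reward
            if done:
                break
        scores_window.append(score)
        if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
            print(f"Solved in {episode} episodes "
                  f"(average score {np.mean(scores_window):.2f})")
            break
    return agent
```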
