bipedal-walker-logo

Introduction

This script trains an agent with Twin Delayed DDPG (TD3) to solve the Bipedal Walker challenge from OpenAI.

To run this script, NumPy, the Gymnasium toolkit (the maintained fork of OpenAI Gym), and PyTorch need to be installed.
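A typical installation might look like the following (assuming pip; the Box2D extra is needed for the Bipedal Walker environment, and building it requires swig):

pip install numpy torch "gymnasium[box2d]"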

Each step through the Bipedal Walker environment takes the general form:

state, reward, terminated, truncated, info = env.step(action)

and the goal is for the agent to take actions that maximize the cumulative reward over the episode. In this environment, both the state and action spaces are continuous: the state space is 24-dimensional and the action space is 4-dimensional. The state space consists of position and velocity measurements of the walker's hull and joints, leg-ground contact flags, and lidar rangefinder readings, while the action space consists of the motor torques applied to the four controllable joints.
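As a concrete sketch, one episode under this interface could be rolled out as follows (with a random action standing in for the trained agent's policy):

import gymnasium as gym

env = gym.make("BipedalWalker-v3")
state, info = env.reset()
episode_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # placeholder: the trained actor would choose this
    state, reward, terminated, truncated, info = env.step(action)
    episode_reward += reward
    done = terminated or truncated
env.close()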

Since the action space is continuous, a naive application of vanilla policy gradient would likely perform poorly in practice. For continuous control, it is often preferable to use DDPG or TD3, among other deep reinforcement learning (DRL) algorithms, since both were designed specifically for continuous action spaces.
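To illustrate how a deterministic policy handles continuous actions, the actor network can emit torques directly by squashing its output with tanh and scaling to the action bounds. A minimal sketch in PyTorch (the layer sizes are illustrative, not necessarily those used in this repository):

import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, max_action: float) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # squash outputs to [-1, 1]
        )
        self.max_action = max_action

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # scale the squashed output to the environment's action bounds
        return self.max_action * self.net(state)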

To learn more about how the agent receives rewards, see here.

Algorithm

A detailed discussion of the TD3 algorithm with proper equation typesetting is provided in the supplemental material here.
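In brief, TD3 improves on DDPG with three modifications: clipped double-Q learning (the Bellman target uses the minimum of two target critics), target policy smoothing (clipped noise is added to the target action), and delayed policy updates. A sketch of the target computation follows; the function and parameter names are illustrative, not this repository's API:

import torch

def td3_target(reward, done, next_state, actor_target, critic1_target, critic2_target,
               max_action=1.0, discount=0.99, policy_noise=0.2, noise_clip=0.5):
    # Clipped double-Q target with target policy smoothing (Fujimoto et al., 2018).
    # reward and done are float tensors of shape [batch, 1].
    with torch.no_grad():
        next_action = actor_target(next_state)
        # target policy smoothing: perturb the target action with clipped noise
        noise = (torch.randn_like(next_action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-max_action, max_action)
        # clipped double-Q: take the minimum of the two target critics
        target_q = torch.min(critic1_target(next_state, next_action),
                             critic2_target(next_state, next_action))
        return reward + (1.0 - done) * discount * target_q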

Results

Solving the Bipedal Walker challenge requires training the agent to walk all the way to the end of the platform without falling over, while using as little motor torque as possible. In the beginning, the agent's attempts were abysmal.

failure...

After training overnight on a GPU, the agent could gracefully complete the challenge with ease!

success!

Below, the average performance over 64 trial runs is shown. The shaded region represents one standard deviation around the average evaluation across all trials.

training-results

Additionally, the video below shows how the relative frequency of individual episode reward values across all trials evolves over the course of training.

Bipedal-Walker-Histograms.mp4

License

All files in the repository are under the MIT license.
