Disclaimer: This work is based on EmbersArc's gym rocket lander. I have only changed parts of it so that the difficulty can be tweaked.
SOURCE: https://github.com/EmbersArc/gym_rocketLander

Original README:
Since this seems to be getting some more attention, I've updated it to the latest version of Gym. I hope everything still works; let me know if it doesn't. Pull requests are always welcome. Happy training!

[demo GIF]

Click here for a higher-quality video

RL algorithm and working model (likely outdated): https://github.com/EmbersArc/PPO

envs/box2d/rocket_lander.py is the only important file, along with some small changes to the init files; see https://github.com/openai/gym#environments

The objective of this environment is to land a rocket on a ship. The environment is highly customizable and accepts either discrete or continuous control inputs.
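
A minimal usage sketch, assuming the environment is registered under the id "RocketLander-v0" (the exact id is defined in this fork's envs init file) and the classic pre-0.26 Gym step API:

```python
import gym

# "RocketLander-v0" is an assumed id; check this fork's envs/__init__.py
# for the name it actually registers.
env = gym.make("RocketLander-v0")

obs = env.reset()
for _ in range(1000):
    action = env.action_space.sample()          # random actions, for illustration only
    obs, reward, done, info = env.step(action)  # classic 4-tuple Gym step API
    env.render()
    if done:
        obs = env.reset()
env.close()
```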

STATE VARIABLES

The state consists of the following variables:

  • x position
  • y position
  • angle
  • first leg ground contact indicator
  • second leg ground contact indicator
  • throttle
  • engine gimbal

If VEL_STATE is set to True, the velocities are also included:

  • x velocity
  • y velocity
  • angular velocity

All state variables are normalized for improved training.
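
As a hypothetical sketch, if the observation vector follows the order listed above (with VEL_STATE enabled), it can be unpacked like this; the authoritative ordering is defined in envs/box2d/rocket_lander.py:

```python
# Assumed ordering: matches the state list above with VEL_STATE = True.
(x_pos, y_pos, angle,
 leg1_contact, leg2_contact,
 throttle, gimbal,
 x_vel, y_vel, angular_vel) = obs   # obs as returned by env.reset() / env.step()
```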

CONTROL INPUTS

Discrete control inputs are:

  • gimbal left
  • gimbal right
  • throttle up
  • throttle down
  • use first control thruster
  • use second control thruster
  • no action
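
A hedged sketch of the discrete action space, assuming the seven actions map to integer indices in the order listed above (the actual mapping lives in envs/box2d/rocket_lander.py):

```python
import gym

env = gym.make("RocketLander-v0")   # assumed id, as in the sketch above

# Assumed index order; verify against rocket_lander.py.
(GIMBAL_LEFT, GIMBAL_RIGHT,
 THROTTLE_UP, THROTTLE_DOWN,
 FIRST_THRUSTER, SECOND_THRUSTER,
 NO_ACTION) = range(7)

obs = env.reset()
obs, reward, done, info = env.step(THROTTLE_UP)
```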

Continuous control inputs are:

  • gimbal (left/right)
  • throttle (up/down)
  • control thruster (left/right)
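
A corresponding sketch for the continuous action space, assuming a three-component action of [gimbal, throttle, control thruster] with each component roughly in [-1, 1] (again, the exact convention and clipping are defined in rocket_lander.py):

```python
import gym
import numpy as np

env = gym.make("RocketLander-v0")   # assumed id, as above
obs = env.reset()

# Assumed layout: [gimbal, throttle, control thruster], each roughly in [-1, 1].
action = np.array([0.0,    # keep the engine gimbal centered
                   0.8,    # high throttle
                   -0.2])  # small burst from the control thruster
obs, reward, done, info = env.step(action)
```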