OpenAI Gym robot reaching environment with PyBullet.
This is a (sort-of) port of the fetch_reach_v1 environment featured in this article, which I wanted to try for my RL experiments but could not because I did not have a MuJoCo license.
Anyway, here is the env in action; the agent was trained with PPO.
*Agent performance at different training episodes*
Install with `pip`:

```bash
git clone https://github.com/mcarfagno/gym-panda-reach
cd gym-panda-reach
pip install .
```
Example usage of the environment:

```python
import gym
import gym_panda_reach

env = gym.make('panda-reach-v0')
env.reset()
env.reward_type = "sparse"  # default is "dense"
for _ in range(100):
    env.render()
    obs, reward, done, info = env.step(
        env.action_space.sample())  # take a random action
env.close()
```
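The `reward_type` switch follows the convention of the OpenAI Fetch environments: a dense reward is the negative distance from the gripper to the goal, while a sparse reward only signals success or failure. Here is a minimal sketch of that idea; the function name and threshold value are hypothetical illustrations, not taken from this package's source:

```python
import math

# Assumed success radius in metres (hypothetical value for illustration)
DISTANCE_THRESHOLD = 0.05

def compute_reward(gripper_pos, goal_pos, reward_type="dense"):
    """Sketch of a fetch-style reaching reward (not this package's actual code).

    dense  -> negative Euclidean distance between gripper and goal
    sparse -> 0.0 if the gripper is within the threshold, else -1.0
    """
    d = math.dist(gripper_pos, goal_pos)
    if reward_type == "sparse":
        return 0.0 if d <= DISTANCE_THRESHOLD else -1.0
    return -d

# Example: a gripper 3 cm from the goal counts as a success in sparse mode
print(compute_reward([0.0, 0.0, 0.03], [0.0, 0.0, 0.0], "sparse"))  # -> 0.0
```

Sparse rewards make the task harder to explore but closer to the real objective, which is why the env exposes both modes.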
References and Special Thanks:
- Mahyar Abdeetedal -> awesome tutorial and inspiration
- OpenAI -> original environment
- PyBullet -> my favourite robotics simulator (sorry, Gazebo)