
UR5_FetchPush env

[Demo GIFs: UR5_FetchPush_sim and UR5_FetchReach_real]

UR5 FetchPush Gym Environment

This repository contains a custom OpenAI Gym-compatible environment for simulating a robotic manipulation task using the UR5 robotic arm. The task, named "FetchPush," involves the UR5 robot pushing an object to a target location on a flat surface. This environment is designed for research and development in the field of reinforcement learning and robotics.

Environment Description

In the FetchPush task, the UR5 robot is equipped with a two-finger gripper and is tasked with pushing a puck to a specified goal location. The environment provides a realistic simulation of the robot's dynamics and the interaction with the object.

Key features of the environment include:

  • Realistic UR5 robot arm simulation with a two-finger gripper (thanks to ElectronicElephant for the meshes and visuals).
  • A puck that the robot must push to the goal.
  • Observation space that includes the position and velocity of the robot's joints, the position of the puck, and the target goal position.
  • Reward function that encourages the robot to push the puck as close to the goal as possible (see the illustrative sketch after this list).
  • Configurable initial conditions for the robot's arm and the puck's position.
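
Fetch-style environments are typically goal-conditioned: the observation is a dict with 'observation', 'achieved_goal', and 'desired_goal' entries, and the reward compares the achieved goal against the desired one. The snippet below is a minimal sketch of the sparse variant of such a reward, not necessarily the exact function used in this repository (the distance threshold and any dense shaping may differ; check the environment code):

import numpy as np

# Minimal sketch of a goal-conditioned sparse reward (illustrative only).
# The threshold value is an assumption, not taken from this repository.
def sparse_push_reward(achieved_goal, desired_goal, threshold=0.05):
    """Return 0.0 if the puck is within `threshold` of the goal, else -1.0."""
    distance = np.linalg.norm(achieved_goal - desired_goal)
    return 0.0 if distance < threshold else -1.0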

TODO List

  • Proper Wandb support
  • Add plots and demo
  • Collect datasets for offline RL methods

Installation

To install the UR5 FetchPush Gym Environment, follow these steps:

git clone https://github.com/nikisim/UR5_FetchPush_env.git
cd UR5_FetchPush_env
pip install -e .

Usage

To use the UR5 FetchPush environment, you can create an instance of the environment and interact with it as you would with any other Gym environment:

import gym
import gym_UR5_FetchPush


env = gym.make('gym_UR5_FetchPush/UR5_FetchPushEnv-v0', render=True)

# Reset the environment
observation = env.reset()

# Sample a random action
action = env.action_space.sample()

# Step the environment
observation, reward, done, info = env.step(action)
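
Building on the snippet above, a minimal random-policy rollout could look like the following (the 50-step budget is an arbitrary choice for illustration; the environment's own episode length may differ):

import gym
import gym_UR5_FetchPush

env = gym.make('gym_UR5_FetchPush/UR5_FetchPushEnv-v0', render=True)

# Roll out a random policy for a fixed number of steps (illustrative only).
observation = env.reset()
for _ in range(50):
    action = env.action_space.sample()                   # random action
    observation, reward, done, info = env.step(action)   # old Gym 4-tuple API
    if done:
        observation = env.reset()                        # start a new episode

env.close()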

Dependencies

This environment requires the following dependencies:

  • gym
  • numpy
  • pybullet (for physics simulation)

Make sure to install these dependencies before using the environment.
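
They can be installed with pip, for example (versions are not pinned here; match them to your Python and Gym setup):

pip install gym numpy pybullet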

Instructions to train DDPG+HER for UR5_FetchPush

If you want to use a GPU, add the --cuda flag (not recommended; training runs better on CPU).

mpirun -np 16 python -u train_UR5.py --num-workers 12 --n-epochs 800 --save-dir saved_models/UR5_FetcReach 2>&1 | tee reach_UR5.log

Check arguments.py for more information about the available flags and options.
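
For background, HER (Hindsight Experience Replay) relabels stored transitions with goals that were actually achieved later in the same episode, so even failed pushes produce useful learning signal. The sketch below illustrates the common 'future' relabeling strategy in generic terms; it is not the repository's implementation (see the training code and arguments.py for that), and the episode/transition layout is assumed for illustration.

import numpy as np

def her_relabel(episode, compute_reward, k=4, rng=np.random):
    """Create extra transitions by relabeling goals with the 'future' strategy.

    `episode` is assumed to be a list of dicts with keys 'obs', 'action',
    'next_obs', 'achieved_goal', 'desired_goal'; `compute_reward` recomputes
    the reward for a relabeled goal. Both are illustrative assumptions.
    """
    relabeled = []
    for t, transition in enumerate(episode):
        for _ in range(k):
            future = rng.randint(t, len(episode))        # pick a later timestep
            new_goal = episode[future]['achieved_goal']  # goal actually reached
            reward = compute_reward(transition['achieved_goal'], new_goal)
            relabeled.append({**transition,
                              'desired_goal': new_goal,
                              'reward': reward})
    return relabeled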

Current success rate: ≈0.82 (0.8191489361702128)

Play Demo

python demo.py --demo-length 10

Collect dataset for offline RL

To collect a dataset in D4RL format using the pretrained DDPG+HER agent, run the command below. By default it collects more than 800,000 transitions with the keys 'observations', 'actions', 'rewards', 'next_observations', and 'terminals'.

python create_dataset.py
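
As an example of consuming the result, assuming create_dataset.py writes the transition arrays to a NumPy .npz file (the file name and on-disk format here are assumptions; adjust the path and loader to whatever the script actually produces):

import numpy as np

# Hypothetical output path -- replace with the file create_dataset.py writes.
data = np.load('UR5_FetchPush_dataset.npz')
for key in ('observations', 'actions', 'rewards', 'next_observations', 'terminals'):
    print(key, data[key].shape)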

Results

Training Performance

Success rate plotted over 1000 training epochs.

[Plot: UR5_FetchReach training results]

[Plot: UR5_FetchPush training results]
