This is a GitHub project containing multiple robotics simulation environments for testing reinforcement learning algorithms.
The environments are modified from the examples provided in the Bullet Physics SDK (see here for more examples).
- Install OpenAI Gym (see here for the installation and more information).
- Install PyBullet (see here for the installation and more information).
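After installing the dependencies, it can help to confirm that both packages are importable before continuing. A minimal sketch, assuming standard installs (`installed` is a hypothetical helper, not part of this project):

```python
import importlib.util

def installed(pkg):
    """Return True if `pkg` can be imported (hypothetical helper)."""
    return importlib.util.find_spec(pkg) is not None

# Both packages should be importable once installed.
for pkg in ("gym", "pybullet"):
    print(pkg, "is installed" if installed(pkg) else "is NOT installed")
```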
- Clone the repository.

      git clone https://github.com/SarahChiu/Robotics_Env_in_PyBullet.git
      cd Robotics_Env_in_PyBullet/src
- Install the package.

      pip install -e .
- Add the following lines to your `.bashrc`:

      export PYTHONPATH=$PYTHONPATH:your_path_to_this_project/src
      export URDF_DATA=your_path_to_this_project/src/data
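After sourcing your `.bashrc`, you can verify that the two variables are visible to Python. A minimal sketch (`check_env` is a hypothetical helper, not part of this project):

```python
import os

def check_env(names):
    """Return the subset of `names` that is unset or empty in the
    environment (hypothetical helper, not part of this project)."""
    return [n for n in names if not os.environ.get(n)]

missing = check_env(["PYTHONPATH", "URDF_DATA"])
if missing:
    print("Missing variables:", ", ".join(missing))
else:
    print("All required variables are set.")
```

Run this in a fresh shell (or after `source ~/.bashrc`) so the exported variables are picked up.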
Here is example code for running a simulation environment.

    import numpy as np
    from kuka.kukaContiGraspEnv import KukaContiGraspEnv as grasp

    # Set renders to True if you want to show the UI
    env = grasp(renders=True)

    # To get an initial observation, use one of the following functions
    ob = env.reset()
    ob, _ = env.getGoodInitState()
    ob = env.getMidInitState()

    # Run an episode and output the episode reward
    ep_r = 0.0
    while True:
        a = np.random.normal(0.0, 0.02, size=env.action_space.shape[0])
        ob, r, t, _ = env.step(a)
        ep_r += r
        if t:
            print('Episode reward: ', ep_r)
            break
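The episode loop above works against any object that exposes the same `reset`/`step` interface, which makes it easy to test without launching the physics simulation. A minimal sketch with a stand-in environment (`DummyEnv` and `run_episode` are hypothetical and not part of this repository):

```python
import numpy as np

class DummyEnv:
    """Stand-in exposing the same reset/step interface as the real
    environments (hypothetical; the real classes live in this repo)."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(3)

    def step(self, action):
        # Constant reward of 1.0 per step; episode ends after `horizon` steps.
        self.t += 1
        done = self.t >= self.horizon
        return np.zeros(3), 1.0, done, {}

def run_episode(env, action_dim=3, scale=0.02, seed=0):
    """Roll out one episode with Gaussian random actions and return
    the accumulated reward, mirroring the loop in the example above."""
    rng = np.random.default_rng(seed)
    env.reset()
    ep_r = 0.0
    while True:
        a = rng.normal(0.0, scale, size=action_dim)
        _, r, done, _ = env.step(a)
        ep_r += r
        if done:
            return ep_r

print(run_episode(DummyEnv()))  # accumulates 1.0 per step for 5 steps
```

Swapping `DummyEnv()` for one of the real environments (e.g. `KukaContiGraspEnv`) exercises the same loop against the simulation.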
Please check out the src directory for more information.
More environments are under development. If you have any problems or suggestions, please feel free to contact me (email, Twitter, LinkedIn).
This project is licensed under the MIT License - see the LICENSE file for details.