NOTE: Check out the maintained version of this source code here.
LANRO is a platform for studying language-conditioned reinforcement learning with a synthetic caretaker that provides instructions in hindsight. It was published as part of our paper Grounding Hindsight Instructions in Multi-Goal Reinforcement Learning for Robotics.
Install from PyPI:
pip install lanro-gym
or from a local clone in editable mode:
git clone https://github.com/frankroeder/lanro-gym.git
cd lanro-gym/ && pip install -e .
or directly with pip and git:
# via https
pip install git+https://github.com/frankroeder/lanro-gym.git
# or via ssh
pip install git+ssh://git@github.com/frankroeder/lanro-gym.git
import gym
import lanro_gym
env = gym.make('PandaStack2-v0', render=True)
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()
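The loop above follows the classic Gym API, where `step` returns `(obs, reward, done, info)`. As a self-contained sketch of that interaction pattern (using a hypothetical stand-in environment instead of lanro-gym, so it runs without PyBullet installed):

```python
import random

class DummyEnv:
    """Minimal stand-in with the classic Gym interface (hypothetical, for illustration)."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self._t = 0

    def reset(self):
        self._t = 0
        return 0.0  # initial observation

    def step(self, action):
        self._t += 1
        obs = float(self._t)
        reward = -1.0                    # sparse-style per-step penalty
        done = self._t >= self.horizon   # episode ends after `horizon` steps
        return obs, reward, done, {}

    def close(self):
        pass

env = DummyEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    obs, reward, done, info = env.step(random.random())
    total_reward += reward
env.close()
print(total_reward)  # -10.0 after 10 steps
```

The real environments differ in their observation and reward structure, but the control flow is the same.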
See the environments README for a list of all available environments.
It is also possible to manipulate the robot interactively with sliders:
python main.py -i --env PandaNLReach2-v0
or with your keyboard:
python main.py -i --keyboard --env PandaNLReach2-v0
We use pytest.
PYTHONPATH=$PWD pytest test/
Measure the FPS of your system:
PYTHONPATH=$PWD python examples/fps.py
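The exact contents of `examples/fps.py` are not shown here, but the core idea of such a benchmark can be sketched as follows (standalone, with a hypothetical dummy step function in place of an actual environment step):

```python
import time

def step():
    """Stand-in for one env.step() call (hypothetical workload)."""
    sum(i * i for i in range(1000))

n_steps = 1000
start = time.perf_counter()
for _ in range(n_steps):
    step()
elapsed = time.perf_counter() - start
fps = n_steps / elapsed
print(f"{fps:.1f} steps/s")
```

With the real environments, the measured rate depends mainly on the physics simulation and rendering settings of your system.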
This work uses code from and was inspired by the following open-source projects:
Homepage https://pybullet.org/
Source: https://github.com/bulletphysics/bullet3/tree/master/examples/pybullet
License: Zlib
Source: https://github.com/qgallouedec/panda-gym
License: MIT
Changes: The code structure of lanro-gym contains copies and extensively modified parts of panda-gym.
pybullet
@MISC{coumans2021,
author = {Erwin Coumans and Yunfei Bai},
title = {PyBullet, a Python module for physics simulation for games, robotics and machine learning},
howpublished = {\url{http://pybullet.org}},
year = {2016--2021}
}
panda-gym
@article{gallouedec2021pandagym,
title = {{panda-gym: Open-Source Goal-Conditioned Environments for Robotic Learning}},
author = {Gallou{\'e}dec, Quentin and Cazin, Nicolas and Dellandr{\'e}a, Emmanuel and Chen, Liming},
year = 2021,
journal = {4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS},
}