Tactile Gym 2.0: Project Website • Paper
Tactile Gym 1.0: Project Website • Paper
This repo contains a suite of reinforcement learning environments built on top of Tactile Sim. These environments use tactile data as the main form of observation when solving tasks, and they can be paired with Tactile Sim2Real domain adaptation to transfer learned policies to the real world.
This repo accompanies the paper "Sim-to-real Deep Reinforcement Learning for Comparing Low-cost High-Resolution Robot Touch". If you find it useful for your research, please cite our paper.
- Installation
- Testing Environments
- Tactile Robot Environment Details
- Observation Details
- Training Agents
- Re-training Agents
- Pretrained Agents
- Alternate Robot Arms
- Additional Info
This repo has only been developed and tested with Ubuntu 18.04 and Python 3.8.
```
git clone https://github.com/dexterousrobot/tactile_gym
cd tactile_gym
pip install -e .
```
Demonstration files are provided in the examples directory. From the base directory, run

```
python examples/demo_env.py -env example_arm-v0
```
Alternate environments can be specified by setting the -env argument to any of the following:
- example_arm-v0
- edge_follow-v0
- surface_follow-v0
- object_roll-v0
- object_push-v0
- object_balance-v0
Usage: You can specify the desired robot arm, tactile sensor, and other environment parameters within the demo_env.py file, as sketched below.
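As an illustration, such a configuration might look like the following. The parameter names here are placeholders for the kinds of options the file exposes, not its exact interface:

```python
# Illustrative placeholders only -- demo_env.py defines the real option names.
env_params = {
    "robot_arm": "ur5",          # which simulated arm the sensor is mounted on
    "tactile_sensor": "tactip",  # which simulated optical tactile sensor to use
    "show_gui": True,            # render the simulation GUI
    "show_tactile": True,        # display the live tactile image stream
}
```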
| Env. Name | Description |
|---|---|
| `edge_follow-v0` | The sensor must traverse along the edge of an object towards a goal. |
| `surface_follow-v0` | The sensor must traverse across an undulating surface towards a goal while maintaining contact. |
| `surface_follow-v1` | A variant of the surface-following task. |
| `surface_follow-v2` | A variant of the surface-following task. |
| `object_roll-v0` | A small object must be rolled beneath the sensor to a goal position. |
| `object_push-v0` | An object must be pushed by the sensor along a trajectory to a goal. |
| `object_balance-v0` | An unstable object must be kept balanced on the sensor. |
All environments support four main observation modes:

| Observation Type | Description |
|---|---|
| `oracle` | Comprises ideal state information from the simulator that would be difficult to collect in the real world; we use it to establish baseline performance for a task. The exact contents vary between environments but commonly include the TCP pose, TCP velocity, goal locations, and the current state of the environment. This observation requires significantly less compute, both to generate data and to train agent networks. |
| `tactile` | Comprises images (default 128x128) retrieved from the simulated optical tactile sensor attached to the end effector of the robot arm (Env Figures, right). Where tactile information alone is not sufficient to solve a task, this observation can be extended with oracle information retrieved from the simulator. This should only include information that could be easily and accurately captured in the real world, such as the TCP pose available on industrial robot arms and the goal pose. |
| `visual` | Comprises RGB images (default 128x128) retrieved from a static, simulated camera viewing the environment (Env Figures, left). Currently, only a single camera is used, although this could be extended to multiple cameras. |
| `visuotactile` | Combines the RGB visual and tactile image observations into a single 4-channel RGBT image. This demonstrates a simple method of multi-modal sensing. |
When additional information is required to solve a task, such as goal locations, appending `_and_feature` to the observation type name returns the combined observation.
The environments use the OpenAI Gym interface, so they should be compatible with most reinforcement learning libraries.
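For example, a random rollout follows the standard Gym loop. This is a minimal sketch; it assumes that importing the package registers the environment IDs listed above:

```python
import gym
import tactile_gym  # assumed to register the tactile env IDs with gym

env = gym.make("edge_follow-v0")
obs = env.reset()
print(env.observation_space)  # shape depends on the chosen observation mode

for _ in range(100):
    action = env.action_space.sample()          # random exploratory action
    obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API
    if done:
        obs = env.reset()
env.close()
```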
We use stable-baselines3 for all training; helper scripts are provided in `tactile_gym/sb3_helpers/`. A simple experiment can be run with `simple_sb3_example.py`, and a full training run can be launched with `train_agent.py`. Experiment hyper-parameters live in the `parameters` directory.
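For orientation, a minimal stable-baselines3 run might look like the sketch below; `train_agent.py` is the full, supported entry point, and the policy choice here is an assumption based on the observation type:

```python
import gym
import tactile_gym  # assumed to register the tactile env IDs with gym
from stable_baselines3 import PPO

env = gym.make("edge_follow-v0")

# "CnnPolicy" suits image observations (tactile/visual/visuotactile);
# "MlpPolicy" would suit the flat oracle observation.
model = PPO("CnnPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_edge_follow")  # hypothetical save path
```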
```
@Article{lin2022tactilegym2,
  title={Tactile Gym 2.0: Sim-to-real Deep Reinforcement Learning for Comparing Low-cost High-Resolution Robot Touch},
  author={Yijiong Lin and John Lloyd and Alex Church and Nathan F. Lepora},
  journal={IEEE Robotics and Automation Letters},
  year={2022},
  volume={7},
  number={4},
  pages={10754-10761},
  month={August},
  publisher={IEEE},
  doi={10.1109/LRA.2022.3195195},
  url={https://ieeexplore.ieee.org/abstract/document/9847020},
}
```
```
@InProceedings{church2021optical,
  title={Tactile Sim-to-Real Policy Transfer via Real-to-Sim Image Translation},
  author={Church, Alex and Lloyd, John and Hadsell, Raia and Lepora, Nathan F.},
  booktitle={Proceedings of the 5th Conference on Robot Learning},
  year={2022},
  editor={Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume={164},
  series={Proceedings of Machine Learning Research},
  month={08--11 Nov},
  publisher={PMLR},
  pdf={https://proceedings.mlr.press/v164/church22a/church22a.pdf},
  url={https://proceedings.mlr.press/v164/church22a.html},
}
```