shbz80/Stablevic27

Stability-guaranteed reinforcement learning for contact-rich manipulation

This repo is the codebase for the paper Stability-guaranteed reinforcement learning for contact-rich manipulation, Khader, S. A., Yin, H., Falco, P., & Kragic, D. (2020), IEEE Robotics and Automation Letters. [IEEE] [arXiv]

Paper abstract

Reinforcement learning (RL) has had its fair share of success in contact-rich manipulation tasks, but it still lags behind in benefiting from advances in robot control theory such as impedance control and stability guarantees. Recently, the concept of variable impedance control (VIC) was adopted into RL with encouraging results. However, the more important issue of stability remains unaddressed. To clarify the challenge in stable RL, we introduce the term all-the-time-stability, which unambiguously means that every possible rollout should be stability-certified. Our contribution is a model-free RL method that not only adopts VIC but also achieves all-the-time-stability. Building on a recently proposed stable VIC controller as the policy parameterization, we introduce a novel policy search algorithm that is inspired by the Cross-Entropy Method and inherently guarantees stability. Our experimental studies confirm the feasibility and usefulness of the stability guarantee and also feature, to the best of our knowledge, the first successful application of RL with all-the-time-stability to the benchmark problem of peg-in-hole.
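
As background for the policy search described above, the sketch below shows the generic Cross-Entropy Method (CEM) for a maximization problem. It is not the paper's stability-constrained variant; the function names (`cross_entropy_method`, `objective`) and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def cross_entropy_method(objective, dim, n_samples=64, n_elite=8, n_iters=50):
    """Generic Cross-Entropy Method for maximizing `objective` over R^dim.

    Repeatedly samples candidates from a Gaussian, keeps the top-scoring
    "elite" fraction, and refits the Gaussian to those elites.
    """
    mean = np.zeros(dim)  # initial sampling mean (assumed)
    std = np.ones(dim)    # initial sampling spread (assumed)
    for _ in range(n_iters):
        # Sample candidate parameter vectors from the current Gaussian.
        samples = mean + std * np.random.randn(n_samples, dim)
        scores = np.array([objective(s) for s in samples])
        # Keep the highest-scoring candidates (the elite set).
        elite = samples[np.argsort(scores)[-n_elite:]]
        # Refit the sampling distribution to the elites.
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Example: maximize a simple quadratic objective with optimum at x = 3.
best = cross_entropy_method(lambda x: -np.sum((x - 3.0) ** 2), dim=5)
```

The paper's algorithm differs from this generic form in that candidates are drawn within a stability-certified policy parameterization (the stable VIC controller), which is how the stability guarantee extends to every rollout rather than only the converged policy.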
