
Dynamic, Non-Prehensile, and Underactuated Object Locomotion through Reinforcement Learning


Learning to Rock-and-walk

1. Overview

Rock-and-walk (video, code) is a method for dynamic, non-prehensile, and underactuated object transport. It is motivated by an interesting question in archaeology: how were the giant rock statues of Easter Island (known as “moai”) transported several hundred years ago? A recent demonstration by archaeologists showed that it is possible to walk such a statue by repetitive rocking. Here, we show that this manipulation capability can be acquired through reinforcement learning in a dynamic simulation environment featuring the object and the support surface, and then deployed on real robot systems. We demonstrate successful object transport with the learned policy through a set of simulated and real-world experiments, performed with a robot arm and an aerial robot interacting with the object in a non-prehensile manner. While the object, which is in contact with a support surface, oscillates sideways passively under gravity, the robot uses the learned policy to move the object forward with a steady gait by regulating the object's mechanical energy and posture. Our experiments show that the learned policy can transport the object despite unmodeled terrain effects and perturbations.
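Conceptually, the policy keeps the object rocking at a steady amplitude by regulating its mechanical energy. A minimal sketch of such an energy-regulation rule is shown below; the point-mass pendulum model, function names, and gain are illustrative assumptions, not the paper's exact formulation:

```python
import math

def mechanical_energy(theta, theta_dot, m=1.0, l=0.5, g=9.81):
    """Mechanical energy of a point-mass pendulum model of the rocking
    object: kinetic plus gravitational potential (illustrative model)."""
    kinetic = 0.5 * m * (l * theta_dot) ** 2
    potential = m * g * l * (1.0 - math.cos(theta))
    return kinetic + potential

def energy_regulating_action(theta, theta_dot, target_energy, gain=0.1):
    """Push along the direction of motion when energy is below target,
    brake when it is above; do nothing at zero velocity."""
    if theta_dot == 0.0:
        return 0.0
    error = target_energy - mechanical_energy(theta, theta_dot)
    return gain * error * math.copysign(1.0, theta_dot)
```

A learned policy can implicitly encode a rule of this kind while also shaping the object's posture for a forward gait.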

Related Paper

A. Nazir, P. Xu, J. Rojas, and J. Seo, "Learning to Rock-and-Walk: Dynamic, Non-Prehensile, and Underactuated Object Locomotion through Reinforcement Learning," submitted to ICRA'22.

Video supplement to the paper

2. Quick Start: Object Transport with Learned Policy in Simulation

What you need:

Step 1:

Clone the repository

git clone https://github.com/HKUST-RML/learn_rockwalk.git

Step 2:

Test object transport with learned policy for:

a. Cone-Shaped Model in Simulation

cd learn_rockwalk/cone_simulation/Rock-Walk
pip install -e .

cd ..
python main_sim.py

b. Moai in Simulation

cd learn_rockwalk/moai_simulation/Rock-Walk
pip install -e .

cd ..
python main_sim.py

Type 'yes' when prompted to test the learned policy.
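Under the hood, main_sim.py rolls the trained policy out in the Rock-Walk environment. The loop looks roughly like the following; the environment and policy here are stubs so the sketch is self-contained, and the Gym-style API is an assumption rather than the repository's exact interface:

```python
class StubEnv:
    """Stand-in for the Rock-Walk simulation environment (illustrative)."""
    def reset(self):
        self.t = 0
        return [0.0, 0.0]          # e.g. [tilt angle, tilt rate]

    def step(self, action):
        self.t += 1
        obs = [0.0, 0.0]
        done = self.t >= 100       # fixed-length episode for the stub
        return obs, 0.0, done, {}

def rollout(env, policy, max_steps=200):
    """Run one episode with a learned policy; return the step count."""
    obs, steps = env.reset(), 0
    for _ in range(max_steps):
        obs, reward, done, info = env.step(policy(obs))
        steps += 1
        if done:
            break
    return steps
```

For example, `rollout(StubEnv(), lambda obs: [0.0])` runs one 100-step stub episode.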

3. Real-World Implementation with Robot Arm

What you need:

Hardware

Software

Step 1: Set up motion shield and get object state

1a. Fix the Arduino Mega (equipped with the motion shield) on the cone object and connect it to the laptop.

1b. Upload this code to the Arduino Mega using the Arduino IDE.

1c. Install the package rockwalk_kinematics in your ROS catkin workspace:

catkin build

1d. Publish the motion shield data in ROS using rosserial:

rosrun rosserial_python serial_node.py _port:=/dev/ttyACM0 _baud:=115200

1e. Calibrate the motion shield by rotating the object to roughly 45 degrees and holding it there for a few seconds. You can check the calibration status with the command

rostopic echo /calibration_motion_shield

An output of '3' indicates successful calibration.
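The motion shield's BNO055 IMU reports calibration status on a scale from 0 (uncalibrated) to 3 (fully calibrated). A small helper for interpreting the echoed value might look like this; the helper itself is hypothetical and not part of the repository:

```python
def calibration_ok(status):
    """Return True when the BNO055 reports full calibration (status 3)."""
    if not 0 <= status <= 3:
        raise ValueError(f"BNO055 calibration status must be 0-3, got {status}")
    return status == 3
```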

1f. Place the object upright on the floor, facing the direction in which it is to be transported. Then, run the following node to compute the object's state from the motion shield data:

rosrun rockwalk_kinematics rockwalk_kinematics_node

1g. Lastly, perform a sanity check on the computed Euler angles and their time rates:

rostopic echo /body_euler
rostopic echo /body_twist
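The Euler angles published on /body_euler are derived from the IMU's orientation quaternion. A standard ZYX (yaw-pitch-roll) quaternion-to-Euler conversion, shown here as an illustrative sanity check rather than the repository's exact code, is:

```python
import math

def quat_to_euler_zyx(w, x, y, z):
    """Convert a unit quaternion to ZYX Euler angles (roll, pitch, yaw)
    in radians, clamping the pitch argument to avoid domain errors."""
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

With the object upright and at rest, the echoed angles should be near zero and the twist should be near zero.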

Step 2: Transport cone object with robot arm using the learned policy

2a. Mount the caging end-effector on the wrist of the robot arm.

CAUTION: REAL ROBOT MOTION AHEAD. PLEASE ENSURE APPROPRIATE SAFETY MEASURES.

2b. Bring the robot arm to an appropriate start configuration by running the script

cd cone_real_arm/
python main_real.py

2c. Configure the object so that its vertical rod is accommodated inside the hole of the caging end-effector. Then, press the return key when prompted to execute real robot motion.

2d. You can obtain a slower (faster) end-effector speed by decreasing (increasing) the value of the parameter action_scale.
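The role of action_scale can be seen in how a normalized policy output is turned into an end-effector command. Roughly, and with variable names that are illustrative rather than those in main_real.py:

```python
def end_effector_step(policy_action, action_scale=0.05):
    """Map a normalized policy action in [-1, 1] to an end-effector
    displacement; a smaller action_scale yields slower motion."""
    clipped = max(-1.0, min(1.0, policy_action))
    return action_scale * clipped
```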

4. Real-World Implementation with Quadrotor

Hardware

Software

Run real experiments

  1. Open OptiTrack.
  2. Run roslaunch rnw_ros ground_station_caging_rl.launch on the ground station, i.e., your laptop.
  3. SSH into the aircraft and run roslaunch rnw_ros flight.launch.

Additional details can be found in the cone_real_quadrotor subdirectory.

Contact Us

For technical enquiry, please contact Abdullah Nazir (sanazir[at]connect.ust.hk) and Pu Xu (pxuaf[at]connect.ust.hk).
