Learning From Demonstration Experiments

This is the top-level package for testing virtual springs and DMPs (dynamic movement primitives). It uses all the other packages and contains ad-hoc programs, including the experiment procedure described in the paper:

  • G. Solak and L. Jamone. Learning by demonstration and robust control of dexterous in-hand robotic manipulation skills. In IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2019.

YouTube video

Warning: This package is not intended for general use. It contains ad-hoc code for experiments presented in the above paper. It may serve as example code, but running it directly on a different robot setup may be unsafe.

Install

Tested on Ubuntu 16.04 (ROS Kinetic) and Ubuntu 18.04 (ROS Melodic).
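
This is a ROS package, so a typical catkin workflow applies: clone it (together with the dependency packages mentioned below, e.g. allegro_hand_kdl, ros_dmp_tools, spring_framework, arq_ur5, ur5_allegro_moveit) into a catkin workspace and build. A minimal sketch, assuming a workspace at ~/catkin_ws:

cd ~/catkin_ws/src
git clone https://github.com/gokhansolak/lfd-experiments-iros2019.git
cd ~/catkin_ws
catkin_make          # or: catkin build
source devel/setup.bash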

Usage

Learning DMPs

Learning from demonstration can be used to learn either joint-space or object-space trajectories. Use record.launch to record a trajectory first:

roslaunch lfd_experiments record.launch state_type:=object state_size:=9 time:=12 trajectory_name:=traj.txt

Parameters of record.launch:

  • state_type: joint, object or fingertip.
  • state_size: Number of DMP dimensions: the joint count for joint-space, or 9 for the object frame.
  • time: Recording time in seconds.
  • trajectory_name: Name of the trajectory file to be saved.
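
For example, a joint-space demonstration could be recorded with a command like the following (16 dimensions here, assuming the Allegro hand's 16 joints; the duration and file name are arbitrary):

roslaunch lfd_experiments record.launch state_type:=joint state_size:=16 time:=10 trajectory_name:=joint_traj.txt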

This node will save a traj.txt file in the ~/.ros directory. You can use this trajectory to train DMPs:

roslaunch dmp_tools train.launch dmp_name:=dmp.xml trajectory_name:=traj.txt

The trained DMP will appear in the ~/.ros directory (unless a path is specified). To reproduce it with the grasp_node, copy the DMP into data/grasp/dmp and add an entry to data/grasp/manipulate.yaml.
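
The schema of manipulate.yaml is defined by the grasp_node and is not documented here; as a purely hypothetical illustration, an entry might associate a manipulation name with the copied DMP file:

# Hypothetical entry; check the existing entries in data/grasp/manipulate.yaml
# for the schema actually expected by grasp_node.
rotate_object:
  dmp_file: dmp.xml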

Please see the ros_dmp_tools readme for optional parameters of train.launch.

Experiments in the paper:

The demonstrations are recorded while the object is in grasp, with gravity compensation and virtual springs active. A grasp can be obtained by running grasp.launch and following the prompted instructions (see below). The user should stop before applying an existing DMP, start record.launch, and give a demonstration.
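
In practice this combines the launch files described above and below; a minimal sketch of the sequence (the trajectory name is arbitrary):

# Terminal 1: obtain a grasp (gravity compensation and virtual springs active),
# follow the prompts, and stop before executing an existing manipulation DMP.
roslaunch lfd_experiments grasp.launch

# Terminal 2: record the demonstration while the object is held in the grasp.
roslaunch lfd_experiments record.launch state_type:=object state_size:=9 time:=12 trajectory_name:=demo_traj.txt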

Reproducing a learned DMP

The experiment assumes that an object is standing in a predefined pose. A UR5 robot arm and an Allegro robot hand (mounted on the arm) are used to grasp, pick up, manipulate and release the object. The experiment program is contained in the grasp_node.cpp file.

The robot arm is moved to position the hand over the object. Then the hand takes a pregrasp shape and approaches the object following a Cartesian path. Once near the object, a pre-learned DMP is executed to close the fingers around the object. At this point, the user is asked to confirm that the grasp is proper, because this experiment does not use a grasp planner. When the fingers are on the object, virtual springs are activated to apply grasping forces to the object. The object is then lifted and a learned DMP is executed.

The program prompts the user at each phase to choose the type of grasp and the DMP to execute.

The arm is controlled using the MoveIt! framework (packages arq_ur5, ur5_allegro_moveit). The Allegro hand is controlled using the KDL library (allegro_hand_kdl). The fingers are closed using the DMP framework and grasping forces are applied using the virtual spring framework (ros_dmp_tools, spring_framework). You can load the RViz configuration file data/config/lfd_experiments.rviz to view the related markers.

Run in the real world:

The robot should be connected and the drivers should be running. Run each line in a separate terminal:

roslaunch ur5_allegro_moveit ur5_allegro_bringup.launch robot_ip:=177.22.22.11 optoforce:=true
roslaunch arq_ur5 moveit_rviz.launch allegro:=true
roslaunch arq_ur5 load_scene.launch scene:=qmul_realistic_world
roslaunch lfd_experiments grasp.launch

grasp.launch will start the grasp demo program. The user should input the grasp name at the beginning of the program. This should be the name of an existing entry in the data/grasp/dmp_grasps.xml file. The program will prompt the user before executing possibly dangerous moves. The user should check the proposed move in RViz before confirming.

The optoforce argument is optional; it loads the robot with the OptoForce fingertips. Default is true.

Run in simulation:

The simulation does not include physics; it only runs RViz with fake controllers.

roslaunch allegro_hand_kdl allegro_torque.launch sim:=true RVIZ:=false
roslaunch ur5_allegro_moveit demo.launch optoforce:=true
roslaunch arq_ur5 load_scene.launch scene:=bimanual_computers
roslaunch lfd_experiments grasp.launch sim:=true
