
GRASPA score computation

We provide a bash script that automatically computes all the scores defined within GRASPA 1.0.

To run the script properly, the user needs to fill in the following information:

  • REACHED_POSES_FOLDER, the folder containing the data collected during the reachability test. An example of a reachability folder is given in GRASPA-test, which collects the data acquired on the iCub humanoid robot.

  • FILE_CAMERA_CALIBRATION, the file containing the data collected during the camera calibration test. An example of a camera calibration file is also given in GRASPA-test.

  • GRASPS_FOLDER, the folder containing the data collected during the grasp execution on a specific layout. An example of a grasps folder is also given in GRASPA-test.

  • LAYOUT_NAME, the name of the layout on which the grasping data have been collected. Accepted labels: Benchmark_Layout_0, Benchmark_Layout_1, Benchmark_Layout_2.

  • MODALITY, to specify whether the benchmark is executed in isolation (label: isolation) or in clutter (label: clutter).

  • THRES_POS_REACH, the threshold on the maximum acceptable position error for the reachability test.

  • THRES_ORIE_REACH, the threshold on the maximum acceptable orientation error for the reachability test.

  • THRES_REACH, the percentage of poses that must be reached within the specified position and orientation thresholds for a region to be considered reachable by the robot.

  • THRES_POS_CAM, the threshold on the maximum acceptable position error for the camera calibration test.

  • THRES_ORIE_CAM, the threshold on the maximum acceptable orientation error for the camera calibration test.

  • THRES_CAM, the percentage of poses that must be reached within the specified position and orientation thresholds for a region to be considered well covered by the robot's camera calibration.
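As a concrete illustration, the variables above might be filled in as follows. All paths and numeric values below are hypothetical placeholders (the units assumed for the thresholds are an assumption, not a documented default), so adapt them to your own data layout:

```shell
# Hypothetical configuration -- adjust paths and thresholds to your setup.
REACHED_POSES_FOLDER="GRASPA-test/reached_poses"          # assumed path
FILE_CAMERA_CALIBRATION="GRASPA-test/camera_calibration"  # assumed path
GRASPS_FOLDER="GRASPA-test/grasps"                        # assumed path
LAYOUT_NAME="Benchmark_Layout_0"   # one of the three accepted labels
MODALITY="isolation"               # or "clutter"
THRES_POS_REACH=0.02               # position threshold (assumed unit)
THRES_ORIE_REACH=0.5               # orientation threshold (assumed unit)
THRES_REACH=0.75                   # fraction of poses required per region
THRES_POS_CAM=0.02
THRES_ORIE_CAM=0.5
THRES_CAM=0.75

echo "Scoring ${LAYOUT_NAME} (${MODALITY})"
```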

First, the grasp quality of each grasp in the specified layout is evaluated using compute-grasp-quality. The computed qualities are added to the grasp data and, finally, each score of the benchmark is computed by scores_evaluation.py.

Output example

Grasp quality evaluation

The program will load the scene, the objects, and the robot end effector. It will compute the grasp quality of every pose for every object in the layout according to the GWS (Grasp Wrench Space) metric. As output, it will show something like

[Figure: grasp-quality-visu — the end effector rendered at the computed grasp poses in the layout]

If multiple grasps are planned for the same object, multiple instances of the end effector will show up at the corresponding poses.

The script will write the computation results by adding a <ComputedQuality> field to the grasp XML files. In this example, planning 5 grasps for each object results in the following field being added for the Banana object:

<ComputedQuality>
    <Grasp name="Grasp 0" quality_collision_free="0.357533" quality_overall="0.250273"/>
    <Grasp name="Grasp 1" quality_collision_free="0.0883257" quality_overall="0.0529954"/>
    <Grasp name="Grasp 2" quality_collision_free="0.372534" quality_overall="0.149013"/>
    <Grasp name="Grasp 3" quality_collision_free="0.37403" quality_overall="0.299224"/>
    <Grasp name="Grasp 4" quality_collision_free="0.438608" quality_overall="0.219304"/>
</ComputedQuality>
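Because the qualities are stored as plain XML attributes, they are easy to read back with a standard parser. The snippet below is a minimal sketch that extracts the overall qualities from an abridged copy of the fragment above; the variable names are illustrative, not part of the benchmark's code:

```python
import xml.etree.ElementTree as ET

# Abridged copy of the <ComputedQuality> fragment shown above.
fragment = """
<ComputedQuality>
    <Grasp name="Grasp 0" quality_collision_free="0.357533" quality_overall="0.250273"/>
    <Grasp name="Grasp 4" quality_collision_free="0.438608" quality_overall="0.219304"/>
</ComputedQuality>
"""

root = ET.fromstring(fragment)
# Map each grasp name to its overall quality.
qualities = {
    grasp.get("name"): float(grasp.get("quality_overall"))
    for grasp in root.findall("Grasp")
}
print(qualities)
```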

To compute grasp quality, each grasp is perturbed in both position and orientation and the results are averaged.

  • quality_collision_free refers to the quality of each grasp planned for the object, averaged only over the perturbations that do not put the end effector in collision with the object.
  • quality_overall refers to the average quality over all perturbed grasps, regardless of whether they are initially in collision or not.
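The distinction between the two averages can be sketched as follows. This is an illustrative assumption, not the benchmark's reference implementation: we assume here that a perturbation in collision contributes a quality of zero to the overall average, which is consistent with quality_overall being lower than quality_collision_free in the example above.

```python
# Hypothetical per-perturbation results for one planned grasp:
# each perturbed pose yields a GWS quality and a collision flag.
perturbed = [
    {"quality": 0.40, "in_collision": False},
    {"quality": 0.35, "in_collision": False},
    {"quality": 0.00, "in_collision": True},   # assumed: colliding poses score 0
    {"quality": 0.45, "in_collision": False},
]

# Average over collision-free perturbations only.
free = [g["quality"] for g in perturbed if not g["in_collision"]]
quality_collision_free = sum(free) / len(free)

# Average over all perturbations, colliding ones included.
quality_overall = sum(g["quality"] for g in perturbed) / len(perturbed)
```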

Benchmark scores computation

  • The benchmark scores are computed for one layout at a time.

  • The reachability and camera calibration scores are computed by comparing the poses defined within the benchmark with those acquired by the user.

  • The graspability, binary success, and grasp stability scores are read directly from the files provided by the user.

  • The grasp quality scores are read from the files filled in by compute-grasp-quality.

  • In the clutter modality, the obstacle avoidance scores are also read from the files provided by the user.

  • Finally, the overall score for the layout under test is provided.
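The pose comparison behind the reachability and camera calibration scores can be sketched as follows. The function below is purely illustrative: its name, the error inputs, and the comparison logic are assumptions modeled on the threshold variables described earlier, not the benchmark's actual implementation:

```python
def region_within_thresholds(pos_errors, orie_errors,
                             thres_pos, thres_orie, thres_frac):
    """Return True if a sufficient fraction of a region's poses falls
    within both the position and the orientation error thresholds
    (illustrative sketch of the THRES_* variables' role)."""
    ok = sum(
        1 for p, o in zip(pos_errors, orie_errors)
        if p <= thres_pos and o <= thres_orie
    )
    return ok / len(pos_errors) >= thres_frac

# Example: 3 of 4 poses meet both thresholds, and 0.75 are required.
reachable = region_within_thresholds(
    pos_errors=[0.01, 0.015, 0.03, 0.005],
    orie_errors=[0.1, 0.2, 0.1, 0.4],
    thres_pos=0.02, thres_orie=0.5, thres_frac=0.75,
)
print(reachable)
```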