Official implementation of the ICRA 2024 paper "Sim-to-Real Grasp Detection with Global-to-Local RGB-D Adaptation".
The implementation is based on MMDetection and DGCAN.
Please refer to get_started.md for installation.
Please see get_started.md for the basic usage of MMDetection. We provide a Colab tutorial and full guidance for beginners on quick runs with existing and new datasets. There are also tutorials for finetuning models, adding new datasets, designing data pipelines, customizing models, customizing runtime settings, and useful tools.
Please refer to FAQ for frequently asked questions.
To prepare the dataset:

- Download GraspNet-1Billion.
- Download our refined rectangle labels and views from GoogleDrive.
- Download pybullet_random.
Arrange the downloaded data as follows:

```
-- data
   -- planer_graspnet
      -- scenes
      -- depths
      -- rect_labels_filt_top10%_depth2_nms_0.02_10
      -- views
      -- models
      -- dex_models
   -- pybullet_random
      -- scenes
      -- rect_labels_filt_nms_0.02_10
```
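As a sanity check before training, a small script can verify that all expected directories are in place. This is a minimal sketch, not part of the release: the paths are taken from the tree above, and the `check_layout` helper name is illustrative.

```python
import os

# Expected layout under data/ (paths assumed from the directory tree in this
# README; adjust if your checkout uses different names).
EXPECTED = [
    "data/planer_graspnet/scenes",
    "data/planer_graspnet/depths",
    "data/planer_graspnet/rect_labels_filt_top10%_depth2_nms_0.02_10",
    "data/planer_graspnet/views",
    "data/planer_graspnet/models",
    "data/planer_graspnet/dex_models",
    "data/pybullet_random/scenes",
    "data/pybullet_random/rect_labels_filt_nms_0.02_10",
]

def check_layout(root="."):
    """Return the expected directories that are missing under `root`."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing directories:")
        for p in missing:
            print("  " + p)
    else:
        print("Dataset layout looks complete.")
```

Run it from the repository root; an empty "missing" list means the layout matches the tree above.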
For training GL-MSDA, the configuration files are in `configs/sim_to_real/`.

Single-GPU training:

```shell
python tools/train.py configs/graspnet/simb2realsense_source_only.py
```

Multi-GPU training:

```shell
CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_train.sh configs/graspnet/simb2realsense_source_only.py 2
```
For testing GL-MSDA, only single-GPU inference is supported:

```shell
python tools/test_graspnet.py checkpoints/GL-MSDA/simb2realsense_fa.py checkpoints/GL-MSDA/simb2realsense_fa.pth --eval grasp
```
If any part of our paper or repository is helpful to your work, please cite:
```
@InProceedings{Ma_2024_ICRA,
  author    = {Ma, Haoxiang and Qin, Ran and Shi, Modi and Gao, Boyang and Huang, Di},
  title     = {Sim-to-Real Grasp Detection with Global-to-Local RGB-D Adaptation},
  booktitle = {International Conference on Robotics and Automation (ICRA)},
  year      = {2024}
}
```