DynamicGraspLab is a reinforcement learning environment extension built on NVIDIA Isaac Lab, focused on robotic dynamic-object grasping tasks. The project provides a modular framework for training agents to grasp objects that are in motion.
- Dual Observation Modes:
  - `grasp_object_state`: A "God's-eye view" environment based on the true states of the object and the robot.
  - `grasp_RGBD`: A vision-based environment that uses a simulated onboard RGB-D camera, more closely mirroring real-world applications.
- Complete RL Workflow: Integrated training and evaluation scripts using the skrl library with the PPO algorithm.
- Modular Environment Design: Easily customize reward functions, curriculums, command generators, and more.
- Ready to Use: Comes with detailed setup instructions and pre-trained models to quickly reproduce results.
- Install Isaac Lab: First, follow the official installation guide to set up Isaac Lab. The `conda` installation is recommended for easier terminal operations.
- Clone This Repository: Clone this repository to a location outside of the main `IsaacLab` directory.

  ```bash
  git clone https://github.com/xwx555/DynamicGraspLab.git
  cd DynamicGraspLab
  ```
- Install the Extension: Using the Python interpreter where Isaac Lab is installed, install this package in editable mode (`-e`).

  ```bash
  # If not using a venv, use the full path to isaaclab.sh, e.g.:
  # /path/to/isaaclab/isaaclab.sh -p -m pip install -e source/DynamicGraspLab
  python -m pip install -e source/DynamicGraspLab
  ```
- Verify the Installation: Run the `list_envs.py` script to check that the new environments are registered.

  ```bash
  python scripts/list_envs.py
  ```

  You should see `DynamicGraspLab-Grasp-UR5-Object-State-v0`, `DynamicGraspLab-Grasp-UR5-RGBD-v0`, and their corresponding Play versions in the output list.
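If the full listing is long, you can filter it down to this project's tasks. The sketch below assumes the script prints one environment ID per line (not confirmed by its source), and uses a hard-coded stub in place of the real output so the snippet is self-contained:

```shell
# Stand-in for the real listing; list_envs.py queries the gymnasium registry.
list_envs_stub() {
  printf '%s\n' \
    'Isaac-Cartpole-v0' \
    'DynamicGraspLab-Grasp-UR5-Object-State-v0' \
    'DynamicGraspLab-Grasp-UR5-Object-State-Play-v0' \
    'DynamicGraspLab-Grasp-UR5-RGBD-v0' \
    'DynamicGraspLab-Grasp-UR5-RGBD-Play-v0'
}

# Keep only this project's environment IDs:
list_envs_stub | grep '^DynamicGraspLab-'

# In a real checkout, the equivalent would be:
# python scripts/list_envs.py | grep 'DynamicGraspLab-'
```

All four project IDs (two tasks plus their Play versions) should survive the filter.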
We provide scripts for training and evaluation using the skrl library.
- Train a state-based agent:

  ```bash
  # Note: --num_envs depends on your GPU
  python scripts/skrl/train.py --task DynamicGraspLab-Grasp-UR5-Object-State-v0 --headless --num_envs 1024
  ```
- Train a vision-based (RGBD) agent:

  ```bash
  # Note: --num_envs depends on your GPU
  python scripts/skrl/train.py --task DynamicGraspLab-Grasp-UR5-RGBD-v0 --enable_cameras --headless --num_envs 256
  ```

Logs and model checkpoints will be saved to the `logs/skrl` directory by default.
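After several runs it can be handy to grab the most recent checkpoint without browsing the log tree. A small sketch, assuming checkpoints end in `.pt` somewhere under the log directory (the per-run subfolder layout is skrl's default and may differ in your setup); the demo runs against a throwaway directory standing in for `logs/skrl/`:

```shell
# Print the path of the most recently modified .pt file under a directory.
latest_ckpt() {
  find "$1" -name '*.pt' -printf '%T@ %p\n' 2>/dev/null \
    | sort -rn | head -n 1 | cut -d' ' -f2-
}

# Demo on a temporary directory that mimics two training runs:
logs=$(mktemp -d)
mkdir -p "$logs/run_a/checkpoints" "$logs/run_b/checkpoints"
touch -d '2025-01-01' "$logs/run_a/checkpoints/agent_1000.pt"
touch -d '2025-01-02' "$logs/run_b/checkpoints/best_agent.pt"
latest_ckpt "$logs"   # prints the run_b checkpoint (newest mtime)
```

On real logs you would call `latest_ckpt logs/skrl/` and pass the result to the evaluation script's `--checkpoint` flag.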
While your agent is training, you can monitor its progress in real time using TensorBoard, a visualization toolkit.
- Open a new terminal (leave the training process running in the original one).
- Navigate to your `DynamicGraspLab` project directory.
- Launch TensorBoard by pointing it to the log directory:

  ```bash
  tensorboard --logdir logs/skrl/
  ```

- Open your web browser and go to the URL provided in the terminal, typically `http://localhost:6006`.
This will open a dashboard where you can visualize learning curves for metrics like reward, episode length, and loss functions.
You can download pre-trained models from the Releases page of this repository.
- Run a pre-trained state-based agent:
  - Create the directory if it doesn't exist: `mkdir -p logs/skrl/grasp_object_state`
  - Download the model `grasp_object_state_best.pt` and place it in that folder.
  - Run the evaluation script:

    ```bash
    python scripts/skrl/play.py --task DynamicGraspLab-Grasp-UR5-Object-State-Play-v0 --checkpoint logs/skrl/grasp_object_state/grasp_object_state_best.pt
    ```
- Run a pre-trained vision-based agent:
  - Create the directory if it doesn't exist: `mkdir -p logs/skrl/grasp_RGBD`
  - Download the model `grasp_RGBD_best.pt` and place it in that folder.
  - Run the evaluation script:

    ```bash
    python scripts/skrl/play.py --task DynamicGraspLab-Grasp-UR5-RGBD-Play-v0 --enable_cameras --checkpoint logs/skrl/grasp_RGBD/grasp_RGBD_best.pt
    ```
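The setup steps above can be wrapped in a small guard that creates the folder and confirms the checkpoint is in place before launching `play.py`. A sketch only; the actual download URL must come from the Releases page and is deliberately not hard-coded, and the demo runs on a throwaway prefix rather than the real `logs/skrl/`:

```shell
# Ensure a checkpoint directory exists and report whether the file is present.
prepare_ckpt() {
  local dir="$1" file="$2"
  mkdir -p "$dir"                      # create the folder if missing
  if [ -f "$dir/$file" ]; then
    echo "found: $dir/$file"
  else
    echo "missing: $dir/$file (download it from the Releases page)" >&2
    return 1
  fi
}

# Demo on a temporary directory:
demo=$(mktemp -d)
prepare_ckpt "$demo/grasp_object_state" grasp_object_state_best.pt || true
touch "$demo/grasp_object_state/grasp_object_state_best.pt"   # stand-in for the download
prepare_ckpt "$demo/grasp_object_state" grasp_object_state_best.pt
```

With the real paths, a `prepare_ckpt logs/skrl/grasp_object_state grasp_object_state_best.pt` check before the `play.py` command avoids a failed run due to a missing file.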
If you use this framework or parts of the code in your research, please cite our paper:

```bibtex
@article{Liang2025Robotic,
  title = {Robotic end-to-end dynamic grasping method based on curriculum reinforcement learning},
  author = {Liang, Yanyang and Xie, Wenxuan and Cui, Wei and Lyu, Hongfei and Li, Da and Zhong, Dongzhou},
  journal = {Journal of Computer Applications},
  year = {2025},
  doi = {10.11772/j.issn.1001-9081.2025060749},
  issn = {1001-9081},
  note = {in Chinese}
}
```

If you use the codebase directly, you can also cite this repository:
```bibtex
@software{DynamicGraspLab,
  author = {Xie, Wenxuan},
  title = {DynamicGraspLab: A Framework for Dynamic Grasping in Isaac Lab},
  url = {https://github.com/xwx555/DynamicGraspLab},
  year = {2025}
}
```