
DynamicGraspLab: A Framework for Dynamic Grasping in Isaac Lab

DynamicGraspLab Demo

DynamicGraspLab is a reinforcement learning environment extension built on NVIDIA Isaac Lab, focused on robotic dynamic grasping: training agents to grasp objects that are in motion. The project provides a modular framework for building and running these tasks.


✨ Features

  • Dual Observation Modes:
    • grasp_object_state: A "God's-eye view" environment based on the true states of the object and the robot.
    • grasp_RGBD: A vision-based environment that uses a simulated onboard RGB-D camera, more closely mirroring real-world applications.
  • Complete RL Workflow: Integrated training and evaluation scripts using the skrl library with the PPO algorithm.
  • Modular Environment Design: Easily customize reward functions, curricula, command generators, and more.
  • Ready to Use: Comes with detailed setup instructions and pre-trained models to quickly reproduce results.
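To illustrate the kind of customization the modular design allows, here is a minimal sketch of a distance-based reward term in the spirit of Isaac Lab's manager-based reward terms. The function name, the tanh kernel, and the `std` parameter are illustrative assumptions, not the framework's actual code.

```python
import math

def object_ee_distance_reward(ee_pos, obj_pos, std=0.1):
    """Hypothetical reward term: grows as the end-effector nears the object.

    A real Isaac Lab reward term would take the environment instance and
    operate on batched tensors; this sketch shows only the shaping logic.
    """
    dist = math.dist(ee_pos, obj_pos)   # Euclidean end-effector-to-object distance
    return 1.0 - math.tanh(dist / std)  # 1.0 at contact, decays toward 0 far away

# At zero distance the kernel returns its maximum of 1.0.
print(object_ee_distance_reward((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # 1.0
```

The tanh kernel keeps the reward bounded and smooth, which tends to stabilize PPO training compared to a raw negative distance.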

🔧 Installation

  1. Install Isaac Lab: First, follow the official installation guide to set up Isaac Lab. The conda-based installation is recommended, as it simplifies running commands from the terminal.

  2. Clone This Repository: Clone this repository to a location outside of the main IsaacLab directory.

    git clone https://github.com/xwx555/DynamicGraspLab.git
    cd DynamicGraspLab
  3. Install the Extension: Using the Python interpreter where Isaac Lab is installed, install this package in editable mode (-e).

    # If not using a venv, use the full path to isaaclab.sh, e.g.:
    # /path/to/isaaclab/isaaclab.sh -p python -m pip install -e source/DynamicGraspLab
    python -m pip install -e source/DynamicGraspLab
  4. Verify the Installation: Run the list_envs.py script to check if the new environments are registered.

    python scripts/list_envs.py

    You should see DynamicGraspLab-Grasp-UR5-Object-State-v0, DynamicGraspLab-Grasp-UR5-RGBD-v0, and their corresponding Play versions in the output list.
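The same check can be done programmatically. In a working Isaac Lab setup you would iterate over `gymnasium.registry` after importing the extension; in this self-contained sketch, a hard-coded list of the IDs named above stands in for the registry so only the filtering logic is shown.

```python
# Stand-in for gymnasium.registry keys after the extension is imported.
registered_ids = [
    "DynamicGraspLab-Grasp-UR5-Object-State-v0",
    "DynamicGraspLab-Grasp-UR5-Object-State-Play-v0",
    "DynamicGraspLab-Grasp-UR5-RGBD-v0",
    "DynamicGraspLab-Grasp-UR5-RGBD-Play-v0",
]

def find_envs(ids, prefix="DynamicGraspLab"):
    """Return, sorted, the environment IDs belonging to this extension."""
    return sorted(i for i in ids if i.startswith(prefix))

print(find_envs(registered_ids))
```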


🚀 Getting Started

We provide scripts for training and evaluation using the skrl library.

Training

  • Train a state-based agent:

    # Note: adjust --num_envs to fit your GPU memory
    python scripts/skrl/train.py --task DynamicGraspLab-Grasp-UR5-Object-State-v0 --headless --num_envs 1024
  • Train a vision-based (RGBD) agent:

    # Note: adjust --num_envs to fit your GPU memory
    python scripts/skrl/train.py --task DynamicGraspLab-Grasp-UR5-RGBD-v0 --enable_cameras --headless --num_envs 256

    Logs and model checkpoints will be saved to the logs/skrl directory by default.

Monitoring Training with TensorBoard

While your agent is training, you can monitor its progress in real time with TensorBoard, a visualization toolkit.

  1. Open a new terminal (leave the training process running in the original one).
  2. Navigate to your DynamicGraspLab project directory.
  3. Launch TensorBoard by pointing it to the log directory:
    tensorboard --logdir logs/skrl/
  4. Open your web browser and go to the URL provided in the terminal, which is typically http://localhost:6006.

This will open a dashboard where you can visualize learning curves for metrics like reward, episode length, and loss functions.
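TensorBoard's smoothing slider applies an exponential moving average to the raw scalars, so noisy reward curves become readable. If you export a curve to CSV, the same smoothing can be reproduced offline; the sketch below uses the classic EMA form (newer TensorBoard versions additionally debias the early values, which is omitted here for brevity), and the weight of 0.6 is just a typical slider setting, not a project default.

```python
def ema_smooth(values, weight=0.6):
    """Exponential moving average in the style of TensorBoard's smoothing slider.

    Each output is weight * previous_output + (1 - weight) * current_value,
    seeded with the first raw value.
    """
    smoothed, last = [], values[0]
    for v in values:
        last = last * weight + v * (1 - weight)
        smoothed.append(last)
    return smoothed

# A step from 0 to 1 is pulled toward 1 gradually rather than jumping.
print(ema_smooth([0.0, 1.0, 1.0, 1.0]))
```

Higher weights give smoother but more lagged curves; when judging whether training has plateaued, it helps to look at the raw values as well.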

Evaluation and Demo

You can download pre-trained models from the Releases page of this repository.

  • Run a pre-trained state-based agent:

    1. Create the directory if it doesn't exist: mkdir -p logs/skrl/grasp_object_state
    2. Download the model grasp_object_state_best.pt and place it in that folder.
    3. Run the evaluation script:
    python scripts/skrl/play.py --task DynamicGraspLab-Grasp-UR5-Object-State-Play-v0 --checkpoint logs/skrl/grasp_object_state/grasp_object_state_best.pt
  • Run a pre-trained vision-based agent:

    1. Create the directory if it doesn't exist: mkdir -p logs/skrl/grasp_RGBD
    2. Download the model grasp_RGBD_best.pt and place it in that folder.
    3. Run the evaluation script:
    python scripts/skrl/play.py --task DynamicGraspLab-Grasp-UR5-RGBD-Play-v0 --enable_cameras --checkpoint logs/skrl/grasp_RGBD/grasp_RGBD_best.pt

📝 Citation

If you use this framework or parts of the code in your research, please cite our paper:

@article{Liang2025Robotic,
  title   = {Robotic end-to-end dynamic grasping method based on curriculum reinforcement learning},
  author  = {Liang, Yanyang and Xie, Wenxuan and Cui, Wei and Lyu, Hongfei and Li, Da and Zhong, Dongzhou},
  journal = {Journal of Computer Applications},
  year    = {2025},
  doi     = {10.11772/j.issn.1001-9081.2025060749},
  issn    = {1001-9081},
  note    = {in Chinese}
}

If you use the codebase directly, you can also cite this repository:

@software{DynamicGraspLab,
  author = {Xie, Wenxuan},
  title  = {DynamicGraspLab: A Framework for Dynamic Grasping in Isaac Lab},
  url    = {https://github.com/xwx555/DynamicGraspLab},
  year   = {2025}
}
