This is the official repo for Rodrigues Network.
The network implementation is in src/networks/backbones/rodrigues_network.py.
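The network takes its name from the Rodrigues rotation formula, which maps an axis-angle pair to a rotation matrix. A minimal NumPy sketch of the formula (illustrative only, not the repo's implementation):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix from an axis and an angle via Rodrigues' formula:
    R = I + sin(theta) * K + (1 - cos(theta)) * K @ K, with K = skew(axis)."""
    x, y, z = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -z, y],
                  [z, 0.0, -x],
                  [-y, x, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Rotating the x-axis by 90 degrees about z yields the y-axis.
R = rodrigues(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # ≈ [0, 1, 0]
```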
We tested the environment with the following settings:
- Ubuntu 24.04
- CUDA 12.8
- NVIDIA RTX 5090
- GCC-13
For other settings, please make sure that:
- the CUDA version supports your GPU's compute capability
- the CUDA version supports your Linux version
- the CUDA version supports your GCC version
- the environment variables PATH, LD_LIBRARY_PATH, and CUDA_HOME are properly set
- when installing PyTorch, you select a wheel built with your CUDA version

Otherwise, some modules may fail to build.
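A quick way to sanity-check the environment variables before building is a short Python snippet (the variable names below are the standard CUDA ones; actual paths depend on your system):

```python
import os

def check_cuda_env():
    """Report whether the CUDA-related environment variables are set.
    Returns a dict mapping each variable name to its value (or None)."""
    names = ["PATH", "LD_LIBRARY_PATH", "CUDA_HOME"]
    values = {name: os.environ.get(name) for name in names}
    for name, value in values.items():
        print(f"{name}: {'set' if value else 'MISSING'}")
    return values

check_cuda_env()
```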
Install git-lfs: sudo apt install git-lfs
Start from an empty folder and execute the following commands:
conda create -n rodrinet python=3.9 -y
conda activate rodrinet
pip install "setuptools<81.0.0" wheel
pip install torch --index-url https://download.pytorch.org/whl/cu128
# or your specific CUDA version
pip install trimesh plotly transforms3d urdf_parser_py tensorboard rich pycollada wandb mani_skill
git clone --branch v0.7.8 https://github.com/NVlabs/curobo.git
pip install -e curobo --no-build-isolation
# may take a couple of minutes to build
git clone https://github.com/facebookresearch/pytorch3d.git
pip install -e pytorch3d --no-build-isolation
# may take a couple of minutes to build
git clone --branch v0.36.0 https://github.com/huggingface/diffusers.git
pip install -e diffusers
pip install torch-geometric
git clone https://github.com/mzhmxzh/RodriguesNetwork.git
cd RodriguesNetwork
cd rodrigues_transform
python setup.py build_ext --inplace
cd ..

The motion experiment's data is generated on the fly.
Start from this project's folder and execute the following commands:
# make directory
mkdir -p data/Motion
# generate training sets
python -m src.scripts.generate_motion_dataset --seed 0 --dataset_config train.yaml --num_traj 100000 --filename train_1e5.pt
# [Optional] to reproduce scaling results
python -m src.scripts.generate_motion_dataset --seed 0 --dataset_config train.yaml --num_traj 1000 --filename train_1e3.pt
python -m src.scripts.generate_motion_dataset --seed 0 --dataset_config train.yaml --num_traj 10000 --filename train_1e4.pt
python -m src.scripts.generate_motion_dataset --seed 0 --dataset_config train.yaml --num_traj 1000000 --filename train_1e6.pt
# generate validation set
python -m src.scripts.generate_motion_dataset --seed 42 --dataset_config train.yaml --num_traj 10000 --filename test_1e4_seed42.pt
# generate test set
python -m src.scripts.generate_motion_dataset --seed 39 --dataset_config train.yaml --num_traj 10000 --filename test_1e4_seed39.pt

Step 1: make the directory data/Imitation
Step 2: download expert trajectories
python -m mani_skill.utils.download_demo "PushCube-v1" -o "data/Imitation"
python -m mani_skill.utils.download_demo "PickCube-v1" -o "data/Imitation"
python -m mani_skill.utils.download_demo "StackCube-v1" -o "data/Imitation"
python -m mani_skill.utils.download_demo "PegInsertionSide-v1" -o "data/Imitation"
python -m mani_skill.utils.download_demo "PlugCharger-v1" -o "data/Imitation"Known issue: something is wrong with PlugCharger-v1's data: the downloaded data has reward_type="dense", while the actual ManiSkill environment builder only supports "none" or "sparse". In order to successfully replay the trajectories, go to data/Imitation/PlugCharger-v1/motionplanning/trajectory.json and change "reward_mode": "dense" to "reward_mode": "sparse".
Step 3: replay
python -m mani_skill.trajectory.replay_trajectory \
--traj-path "data/Imitation/PushCube-v1/motionplanning/trajectory.h5" \
--use-first-env-state -c pd_joint_delta_pos -o state \
--save-traj -n 10 -b cpu
python -m mani_skill.trajectory.replay_trajectory \
--traj-path "data/Imitation/PickCube-v1/motionplanning/trajectory.h5" \
--use-first-env-state -c pd_joint_delta_pos -o state \
--save-traj -n 10 -b cpu
python -m mani_skill.trajectory.replay_trajectory \
--traj-path "data/Imitation/StackCube-v1/motionplanning/trajectory.h5" \
--use-first-env-state -c pd_joint_delta_pos -o state \
--save-traj -n 10 -b cpu
python -m mani_skill.trajectory.replay_trajectory \
--traj-path "data/Imitation/PegInsertionSide-v1/motionplanning/trajectory.h5" \
--use-first-env-state -c pd_joint_delta_pos -o state \
--save-traj -n 10 -b cpu
python -m mani_skill.trajectory.replay_trajectory \
--traj-path "data/Imitation/PlugCharger-v1/motionplanning/trajectory.h5" \
--use-first-env-state -c pd_joint_delta_pos -o state \
--save-traj -n 10 -b cpu

TODO
Make the directories experiments and wandb.
Log in to wandb and pass your wandb username via --entity to log training curves.
Replace --model_type RodriguesFK with a baseline from ['MLPFK', 'FKMLPFK', 'RodriguesFK', 'GCNFK', 'BoTFK', 'BoTFKNoMix', 'TrFK'] to compare.
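To sweep all baselines, the training invocation can be generated per model type. A convenience sketch (the flags match the training command below; the exp_name convention is an assumption, and the loop only builds strings, it does not launch anything):

```python
BASELINES = ["MLPFK", "FKMLPFK", "RodriguesFK", "GCNFK", "BoTFK", "BoTFKNoMix", "TrFK"]

def fk_train_command(model_type, seed=1, entity="YOUR_WANDB_NAME"):
    """Build the train_fk invocation for one baseline."""
    return (
        f"python -m src.scripts.train_fk --entity {entity} "
        f"--seed {seed} --exp_name exp_{seed} --model_type {model_type} "
        f"--training_config benchmark_fk.yaml "
        f"--trainset_config train.yaml --testset_config test.yaml"
    )

for cmd in map(fk_train_command, BASELINES):
    print(cmd)
```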
Training:
python -m src.scripts.train_fk \
--entity YOUR_WANDB_NAME \
--seed 1 --exp_name exp_1 \
--model_type RodriguesFK \
--training_config benchmark_fk.yaml \
--trainset_config train.yaml \
--testset_config test.yaml

Evaluation:
python -m tests.eval.FK.evaluate_models \
--dataset_config test.yaml \
--model_type RodriguesFK \
--exp_name exp_1 \
--ckpt_iter best

Replace --model_type RodriguesMotion with a baseline from ['MLPMotion', 'GCNMotion', 'BoTMotion', 'TrMotion', 'RodriguesMotion'] to compare.
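A common summary metric when comparing motion baselines is the mean squared error between predicted and ground-truth trajectories. A generic NumPy sketch (the array shapes are illustrative, not the repo's evaluation code):

```python
import numpy as np

def trajectory_mse(pred, target):
    """Mean squared error over a batch of trajectories.
    pred, target: arrays of shape (batch, timesteps, dof)."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    assert pred.shape == target.shape
    return float(np.mean((pred - target) ** 2))

# Toy example: a constant offset of 0.1 on every entry gives MSE ≈ 0.01.
target = np.zeros((2, 5, 7))          # 2 trajectories, 5 steps, 7 joints
pred = target + 0.1
print(trajectory_mse(pred, target))   # ≈ 0.01
```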
Training:
python -m src.scripts.train_motion \
--entity YOUR_WANDB_NAME \
--seed 1 \
--exp_name exp_3M_1e5_1 \
--model_type RodriguesMotion \
--training_config benchmark_motion.yaml \
--trainset_config train_1e5.yaml \
--testset_config test_1e4_seed42.yaml

Evaluation:
python -m tests.eval.Motion.evaluate_models \
--dataset_config test_1e4_seed39.yaml \
--model_type RodriguesMotion \
--exp_name exp_3M_1e5_1 \
--ckpt_iter best

Replace --model_type RodriguesImitation with a baseline from ['UNetImitation', 'TrImitation', 'RodriguesImitation'] to compare.
Replace --env_id PickCube-v1 with a task from ['PushCube-v1', 'PickCube-v1', 'StackCube-v1', 'PegInsertionSide-v1', 'PlugCharger-v1'].
Training:
python -m src.scripts.train_imitation \
--entity YOUR_WANDB_NAME \
--seed 1 \
--exp_name exp_17M_1 \
--model_type RodriguesImitation \
--env_id PickCube-v1

Evaluation:
python -m tests.eval.Imitation.evaluate_models \
--env_id PickCube-v1 \
--model_type RodriguesImitation \
--exp_name exp_17M_1 \
--capture_video

TODO