CALAMARI


This is the official GitHub repository for CALAMARI: Contact-Aware and Language conditioned spatial Action MApping for contact-RIch manipulation (CoRL 2023).

We trained on an NVIDIA A6000 GPU and ran inference on an RTX 3080 and an RTX 2070.

1. Install Dependencies

conda create -n calamari python=3.8
conda activate calamari
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
conda env create -f environment.yml
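To confirm that PyTorch sees your GPU before continuing, a quick optional check (not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

On a CUDA-capable machine this should print 1.7.0 and True.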

We utilize heatmap extraction from Semantic Abstraction (Ha and Song, CoRL 2022).

git submodule add -f git@github.com:yswi/semantic-abstraction.git calamari/semantic_abstraction
git submodule add -b ros -f git@github.com:UM-ARM-Lab/pytorch_mppi.git calamari/pytorch_mppi
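If the submodule contents are not checked out after these commands (for example, after a fresh clone), the standard way to fetch them is:

git submodule update --init --recursive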

2. Install Project

pip install -e .
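A quick way to verify the editable install (assuming the package is importable as calamari, as the folder layout below suggests):

python -c "import calamari"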

3. Download Dataset & Pretrained Weights

  1. Download dataset.zip from the link and unzip it under the dataset/ folder.

  2. Download the pretrained .pth files from the link and put them under the script/model/ folder.

  3. As a result, the directory structure should look like the tree below.

── calamari
│   ├── calamari
│   ├── dataset
│   │   ├── wipe_desk
│   │   ├── sweep_to_dustpan
│   │   ├── push_buttons
│   │   ├── ...
│   ├── script
│   │   ├── model
...
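To sanity-check this layout before training, here is a minimal optional sketch (not part of the original instructions; run it from the repository root):

from pathlib import Path

# Folders expected after the download steps above.
expected = ["dataset/wipe_desk", "dataset/sweep_to_dustpan", "dataset/push_buttons", "script/model"]
for folder in expected:
    print(folder, "OK" if Path(folder).is_dir() else "MISSING")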

4. (Optional) Train Policy from Scratch

python script/train.py --task <TASK NAME> --logdir <FOLDER NAME> --gpu_id <GPU IDX>

Note: We use an A6000 (48 GB) for training. You can decrease the batch size in config_multi_conv.py to fit your GPU memory, but expect a performance drop.
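For example, training the wiping task on GPU 0 would look roughly like this (assuming the task name matches its dataset folder name; the log folder name here is arbitrary):

python script/train.py --task wipe_desk --logdir wipe_desk_run --gpu_id 0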

5. Inference

Inference requires CoppeliaSim, PyRep, and RLBench. To set them up, clone the RLBench repo into your project directory:

git clone git@github.com:MMintLab/rlbench.git
cd rlbench

and follow its instructions for PyRep and CoppeliaSim setup.
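Once the simulator stack is installed, a quick optional import check (assuming the standard package names pyrep and rlbench):

python -c "import pyrep, rlbench"

Then run inference with: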

python script/plan/mpc.py --task <task name> --txt_idx <txt idx> --ttm_idx <ttm idx> -v <task variation idx>  -s 0 --logdir <log dir>

Below are the parameter combinations we used in the paper. You can find the inference code in script/plan/mpc.py.

task    object      ttm idx    task variation idx
wipe    train obj   0          0
wipe    test obj1   1          0
wipe    test obj2   2          0
sweep   train obj   0          0
sweep   test obj1   1          0
sweep   test obj2   2          0
push    train obj   0          0
push    test obj1   0          1
push    test obj2   0          2
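For example, evaluating the wipe task on test object 1 with the values above would look roughly like this (assuming the --task flag takes the short task name from the table; txt idx and log dir are left as placeholders, as in the command template):

python script/plan/mpc.py --task wipe --txt_idx <txt idx> --ttm_idx 1 -v 0 -s 0 --logdir <log dir>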

6. (Optional) Train with Custom Data

Generate heatmaps for the custom data:

 python script/dataprocessing/generate_heatmap.py --task <TASK>
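For example, for the wiping task (assuming the task name matches its dataset folder name):

 python script/dataprocessing/generate_heatmap.py --task wipe_desk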

Notes

This repository trains the policy on the RLBench dataset. Please reach out to the author at yswi@umich.edu with further questions.
