Created by Kuan Fang, Yuke Zhu, Animesh Garg, Silvio Savarese and Li Fei-Fei
If you find this code useful for your research, please cite:
@article{fang2019cavin,
title={Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation},
author={Kuan Fang and Yuke Zhu and Animesh Garg and Silvio Savarese and Li Fei-Fei},
journal={Conference on Robot Learning (CoRL)},
year={2019}
}
This repo is an implementation of the CAVIN planner from our CoRL 2019 paper. You can check out the project website for more information.
The code is based on TF-Agents. The core algorithm can be applied to any task that implements the OpenAI Gym interface, given a reward function. We demonstrate CAVIN on three planar pushing tasks with different goals and constraints, in simulation and in the real world. The task environments are implemented in RoboVat.
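For reference, a task only needs to expose the standard Gym reset/step interface, with the reward computed inside step. Below is a minimal illustrative sketch; the class name, shapes, dynamics and reward are placeholders, not part of this repo:

import gym
import numpy as np

class CustomPushEnv(gym.Env):
    """Illustrative Gym-style task; names, shapes and reward are placeholders."""

    def __init__(self):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        self._state = np.zeros(4, dtype=np.float32)

    def reset(self):
        self._state = np.zeros(4, dtype=np.float32)
        return self._state

    def step(self, action):
        # A real task would advance the physics simulation here; the
        # planner only needs the next observation and the task reward.
        self._state = np.clip(self._state + 0.1 * action, -1.0, 1.0)
        reward = float(-np.linalg.norm(self._state - 1.0))  # placeholder distance-to-goal reward
        return self._state, reward, False, {}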
Create a virtual environment (recommended)
Create a new virtual environment in the root directory or anywhere else:
virtualenv --system-site-packages -p python3 .venv
Activate the virtual environment every time before you use the package:
source .venv/bin/activate
And exit the virtual environment when you are done:
deactivate
Install the package
The package can be installed by running:
pip install -r requirements.txt
Install tf-slim and RoboVat using:
python setup.py install
(These will be added to the requirements soon.)
Download data
Download and unzip the assets, configs and models folders to the root directory:
wget ftp://cs.stanford.edu/cs/cvgl/robovat/assets.zip
wget ftp://cs.stanford.edu/cs/cvgl/robovat/configs.zip
wget ftp://cs.stanford.edu/cs/cvgl/cavin/models.zip
unzip assets.zip
unzip configs.zip
unzip models.zip
To execute a planar pushing task (e.g. crossing) with a trained CAVIN model, we can run:
python run_env.py \
--env PushEnv \
--env_config configs/envs/push_env.yaml \
--policy_config configs/policies/push_policy.yaml \
--config_bindings "{'MAX_MOVABLE_BODIES':3,'NUM_GOAL_STEPS':3,'TASK_NAME':'crossing','LAYOUT_ID':0}" \
--policy CavinPolicy --checkpoint models/baseline_20191001_cavin/ \
--debug 1
Note: The code was originally developed with PyBullet 1.8.0. After switching to a newer version of the package, we observed minor discrepancies in the simulation results, though the change is not significant.
We suggest running the data collection script in parallel across multiple CPU clusters, since data collection may take around 10k-20k CPU hours. To collect task-agnostic interactions using the heuristic pushing policy (a local parallelization sketch follows the command below):
python tfrecord_collect.py \
--env PushEnv \
--env_config configs/envs/push_env.yaml \
--policy HeuristicPushPolicy \
--policy_config configs/policies/push_policy.yaml \
--rb_dir episodes/task_agnostic_interactions/
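As a rough sketch of running several collection workers on a single machine (this assumes concurrent instances write uniquely named files into the shared --rb_dir; verify this before scaling up):

import subprocess

# Launch several independent collection workers; each runs the same
# command and writes into the shared replay-buffer directory.
cmd = [
    "python", "tfrecord_collect.py",
    "--env", "PushEnv",
    "--env_config", "configs/envs/push_env.yaml",
    "--policy", "HeuristicPushPolicy",
    "--policy_config", "configs/policies/push_policy.yaml",
    "--rb_dir", "episodes/task_agnostic_interactions/",
]
workers = [subprocess.Popen(cmd) for _ in range(8)]
for worker in workers:
    worker.wait()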
Some of the collected files might be corrupted due to unexpected termination of the collection script. To filter out the corrupted files:
python filter_corrupted_tfrecords.py --data episodes/task_agnostic_interactions/
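The script above takes care of this. For intuition, a minimal integrity check might look like the sketch below; the .tfrecord extension is an assumption, so adjust it to the actual file names:

import os
import tensorflow as tf

def is_readable(path):
    # Iterating over every record raises DataLossError on truncated files.
    try:
        for _ in tf.compat.v1.io.tf_record_iterator(path):
            pass
        return True
    except tf.errors.DataLossError:
        return False

data_dir = 'episodes/task_agnostic_interactions/'
for name in os.listdir(data_dir):
    path = os.path.join(data_dir, name)
    if name.endswith('.tfrecord') and not is_readable(path):
        print('corrupted:', path)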
Before training, split the data into train and eval folders.
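The repo does not prescribe a split ratio. Below is a simple sketch that holds out roughly 10% of the files for evaluation; the file extension, ratio and seed are all assumptions:

import os
import random
import shutil

data_dir = 'episodes/task_agnostic_interactions/'
files = sorted(f for f in os.listdir(data_dir) if f.endswith('.tfrecord'))
random.seed(0)
random.shuffle(files)
n_eval = max(1, len(files) // 10)  # hold out ~10% for eval

for subdir in ('train', 'eval'):
    os.makedirs(os.path.join(data_dir, subdir), exist_ok=True)
for i, name in enumerate(files):
    subdir = 'eval' if i < n_eval else 'train'
    shutil.move(os.path.join(data_dir, name), os.path.join(data_dir, subdir, name))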
To train the CAVIN model on the collected data:
python tfrecord_train_eval.py \
--env PushEnv \
--env_config configs/envs/push_env.yaml \
--policy_config configs/policies/push_policy.yaml \
--rb_dir episodes/task_agnostic_interactions/ \
--agent cavin \
--working_dir models/YOUR_MODEL_NAME
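Assuming the training script writes TensorFlow summaries under --working_dir, as TF-Agents training loops typically do, progress can be monitored with TensorBoard:
tensorboard --logdir models/YOUR_MODEL_NAME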
To run different tasks, we can set different values in --config_bindings. Specifically, 'TASK_NAME' can be set to 'clearing', 'insertion' or 'crossing', and 'LAYOUT_ID' can be set to 0, 1 or 2.
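For example, to run the insertion task on layout 1 with the trained model, keeping the other bindings from the command above unchanged:
python run_env.py \
--env PushEnv \
--env_config configs/envs/push_env.yaml \
--policy_config configs/policies/push_policy.yaml \
--config_bindings "{'MAX_MOVABLE_BODIES':3,'NUM_GOAL_STEPS':3,'TASK_NAME':'insertion','LAYOUT_ID':1}" \
--policy CavinPolicy --checkpoint models/baseline_20191001_cavin/ \
--debug 1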