StanfordVL/robovat
RoboVat

About
Installation
Examples
Citation

About

RoboVat is a toolkit for fast development of robotic task environments in simulation and the real world. It provides unified APIs for robot control and perception to bridge the reality gap. Its name is derived from brain in a vat.

Currently, RoboVat supports the Sawyer robot via the Intera SDK. The simulated environments run in PyBullet. The codebase is under active development and more environments will be included in the future.

Installation

  1. Create a virtual environment (recommended)

    Create a new virtual environment in the root directory or anywhere else:

    virtualenv --system-site-packages -p python .venv

    Activate the virtual environment every time before you use the package:

    source .venv/bin/activate

    And exit the virtual environment when you are done:

    deactivate
  2. Install the package

    Use pip to install the package:

    pip install robovat

    The package can also be installed by running:

    python setup.py install
  3. Download assets

    Download the assets and configs archives from Box or via the FTP links below, and unzip them into the root directory:

    wget ftp://cs.stanford.edu/cs/cvgl/robovat/assets.zip
    wget ftp://cs.stanford.edu/cs/cvgl/robovat/configs.zip
    unzip assets.zip
    unzip configs.zip

    If the assets folder is not in the root directory, remember to specify the argument --assets PATH_TO_ASSETS when executing the example scripts.

Examples

Command Line Interface

A command line interface (CLI) is provided for debugging purposes. We recommend running the CLI to test the simulation environment after installation and data downloading:

python tools/sawyer_cli.py --mode sim

Detailed usage of the CLI is explained in the source code of tools/sawyer_cli.py. The simulated and real-world Sawyer robot can be tested using the following commands in the terminal:

  • Visualize the camera images: v
  • Mouse click and reach: c
  • Reset the robot: r
  • Close and open the gripper: g and o

Planar Pushing

Execute a planar pushing task with a heuristic policy:

python tools/run_env.py --env PushEnv --policy HeuristicPushPolicy --debug 1

To execute semantic pushing tasks, we can add bindings to the configurations:

python tools/run_env.py --env PushEnv --policy HeuristicPushPolicy --env_config configs/envs/push_env.yaml --policy_config configs/policies/heuristic_push_policy.yaml --config_bindings "{'TASK_NAME':'crossing','LAYOUT_ID':0}" --debug 1
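The value passed to --config_bindings is a Python-style dictionary literal. As a minimal sketch of how such a string can be turned into configuration overrides (the helper name and the exact mechanism inside tools/run_env.py are assumptions here, shown only to illustrate the expected format):

```python
import ast


def parse_config_bindings(bindings_str):
    """Parse a --config_bindings string into a dict of overrides.

    Hypothetical helper: the actual parsing logic in tools/run_env.py
    may differ, but the argument must be a valid dict literal.
    """
    bindings = ast.literal_eval(bindings_str)
    if not isinstance(bindings, dict):
        raise ValueError('Expected a dict literal, got: %r' % bindings_str)
    return bindings


overrides = parse_config_bindings("{'TASK_NAME':'crossing','LAYOUT_ID':0}")
print(overrides)  # {'TASK_NAME': 'crossing', 'LAYOUT_ID': 0}
```

Because ast.literal_eval only accepts literals, arbitrary code in the bindings string is rejected rather than executed.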

To execute the tasks with the pretrained CAVIN planner, please see this codebase.

Process Objects for Simulation

Many simulators load bodies in the URDF format. Given an OBJ file, the corresponding URDF file can be generated by running:

python tools/convert_obj_to_urdf.py --input PATH_TO_OBJ --output OUTPUT_DIR

To simulate concave bodies, the OBJ file needs to be processed by convex decomposition. The URDF file of a concave body can be generated using V-HACD for convex decomposition by running:

python tools/convert_obj_to_urdf.py --input PATH_TO_OBJ --output OUTPUT_DIR --decompose 1
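For reference, a URDF wrapping an OBJ mesh typically looks like the illustrative sketch below. The file names, mass, and inertia values here are placeholders, not the exact output of convert_obj_to_urdf.py:

```xml
<?xml version="1.0"?>
<robot name="example_body">
  <link name="base_link">
    <visual>
      <geometry>
        <mesh filename="example.obj" scale="1 1 1"/>
      </geometry>
    </visual>
    <collision>
      <!-- With --decompose 1, the collision mesh would be the
           convex-decomposed output produced by V-HACD. -->
      <geometry>
        <mesh filename="example_vhacd.obj" scale="1 1 1"/>
      </geometry>
    </collision>
    <inertial>
      <mass value="0.1"/>
      <inertia ixx="1e-4" ixy="0" ixz="0" iyy="1e-4" iyz="0" izz="1e-4"/>
    </inertial>
  </link>
</robot>
```

Keeping a detailed visual mesh while using convex pieces for collision is the usual pattern, since physics engines such as PyBullet resolve contacts only between convex shapes.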

Citation

If you find this code useful for your research, please cite:

@article{fang2019cavin, 
    title={Dynamics Learning with Cascaded Variational Inference for Multi-Step Manipulation},
    author={Kuan Fang and Yuke Zhu and Animesh Garg and Silvio Savarese and Li Fei-Fei}, 
    journal={Conference on Robot Learning (CoRL)}, 
    year={2019} 
}