robosuite v1.0 Benchmarking

Welcome to the robosuite v1.0 benchmarking repository! This repo is intended to make it easy to replicate our benchmarking results, and to provide a skeleton for further experiments or benchmarking using the identical training environment.

In addition, this repo replaces the official rlkit used in robosuite-benchmark with rlkit-pmoe. The package dependencies are the same as in the original robosuite-benchmark. Thanks to all the contributors of robosuite-benchmark.

Getting Started

Our benchmark consists of training Probabilistic Mixture-of-Experts (PMOE) agents implemented in rlkit-pmoe. We built on top of rlkit-pmoe's standard functionality to provide extra features useful for our purposes, such as video recording of rollouts and asymmetrical exploration / evaluation horizons.

To begin, clone this repository from your terminal and move into the directory:

$ git clone https://github.com/JieRen98/robosuite-benchmark-pmoe.git
$ cd robosuite-benchmark-pmoe

Our benchmarking environment consists of a Conda-based Python virtual environment running Python 3.7.4, and is supported on Mac OS X and Linux. Other versions / machine configurations have not been tested. Conda is a useful tool for creating virtual environments for Python, and can be installed from the official Conda site.

After installing Conda, create a new virtual environment using our pre-configured environment setup, and activate this environment. Note that we unfortunately have to use a two-step installation process to avoid some issues with precise package versions:

$ conda env create -f environments/env.yml
$ source activate rb_bench
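
As a quick sanity check, you can confirm that the environment activated correctly and is running the expected interpreter (Python 3.7.4, per the environment described above):

$ (rb_bench) python --version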

Next, we must install the modified rlkit, RLkit-PMOE. Clone the rlkit-pmoe repository into your preferred directory, then copy its rlkit directory into this working directory:

$ (rb_bench) cd <PATH_TO_YOUR_RLKIT_LOCATION>
$ (rb_bench) git clone https://github.com/JieRen98/rlkit-pmoe.git
$ (rb_bench) cd rlkit-pmoe
$ (rb_bench) cp -r rlkit <PATH_TO_YOUR_WORKING_LOCATION>
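
To verify the copy succeeded, you can try importing rlkit; this is a minimal check and assumes it is run from the working directory containing the copied rlkit folder:

$ (rb_bench) cd <PATH_TO_YOUR_WORKING_LOCATION>
$ (rb_bench) python -c "import rlkit"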

The file tree should look like this:

.
├── environments
│   └── env.yml
├── notebooks
│   ├── create_benchmark_environments.ipynb
│   └── create_plots.ipynb
├── rlkit
│   ├── core
│   ├── data_management
│   ├── envs
│   ├── exploration_strategies
│   ├── __init__.py
│   ├── launchers
│   ├── policies
│   ├── pythonplusplus.py
│   ├── samplers
│   ├── torch
│   └── util
├── scripts
│   ├── rollout.py
│   └── train.py
├── utils
│   ├── arguments.py
│   ├── rlkit_custom.py
│   └── rlkit_utils.py
└── README.md

Lastly, for visualizing active runs, we utilize rlkit's extraction of rllab's viskit package:

$ (rb_bench) cd <PATH_TO_YOUR_VISKIT_LOCATION>
$ (rb_bench) git clone https://github.com/vitchyr/viskit.git
$ (rb_bench) cd viskit
$ (rb_bench) pip install -e .

Running an Experiment

To validate our results on your own machine, or to experiment with another set of hyperparameters, we provide a training script as an easy entry point for executing individual experiments. Note that this repository must be added to your PYTHONPATH before running any scripts; this can be done like so:

$ (rb_bench) cd <PATH_TO_YOUR_ROBOSUITE_BENCHMARKING_REPO_DIR>
$ (rb_bench) export PYTHONPATH=.:$PYTHONPATH
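
With the repository root on your PYTHONPATH, the provided scripts should be runnable directly. As a quick check, you can print the training script's usage, assuming it exposes argparse's standard help flag:

$ (rb_bench) python scripts/train.py --help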

For a given training run, a configuration must be specified -- this can be done in one of two ways:

  1. Command line arguments. It may be useful to specify your desired configuration on the fly, from the command line. However, as there are many potential arguments that can be provided for training, we have modularized and organized them within a separate arguments module that describes all potential arguments for a given script. For the training script, the robosuite, agent, and training args are the relevant groups, and default values are already specified for most of these arguments. See the example command after this list.

  2. Configuration files. It is often more succinct and efficient to specify a configuration file (.json), and load this during runtime for training. If the --variant argument is specified, the configuration will be loaded and used for training. In this case, the resulting script execution line will look like so:

$ (rb_bench) python scripts/train.py --variant <PATH_TO_CONFIG>.json

This is also a useful method for automatically validating our benchmarking experiments on your own machine, as every experiment's configuration is saved and provided in this repo. For an example of the structure and values expected within a given configuration file, please see this example.
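For instance, a purely command-line invocation (method 1) might look like the sketch below; the flag names here are assumptions for illustration, so consult utils/arguments.py for the actual argument names and their defaults:

$ (rb_bench) python scripts/train.py --env Lift --robots Panda --controller OSC_POSE --seed 17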

Note that, by default, all training runs are stored in the log/runs/ directory, though this location can be changed by specifying a different file location with the --log_dir flag.
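
For example, to load a configuration file while redirecting the logs to a custom location:

$ (rb_bench) python scripts/train.py --variant <PATH_TO_CONFIG>.json --log_dir <PATH_TO_CUSTOM_LOG_DIR>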

Visualizing Training

During training, you can visualize currently logged runs using viskit (see Getting Started). Once viskit is installed and configured, you can view your results in your browser at port 5000:

$ (rb_bench) python <PATH_TO_VISKIT_DIR>/viskit/frontend.py <PATH_TO_LOG_DIR>

Visualizing Rollouts

We provide a rollout script for executing and visualizing rollouts using a trained agent model. The relevant command-line arguments that can be specified for this script are the rollout args in the utils/arguments.py module. Of note:

  • load_dir specifies the path to the logging directory that contains both the variant.json and params.pkl specifying the training configuration and agent model, respectively,

  • camera specifies the robosuite-specific camera to use for rendering images / video (frontview and agentview are common choices),

  • record_video specifies whether to save a video of the resulting rollouts (note that if this is set, no onscreen renderer will be used; see the recording example below)

A simple example for using this rollout script can be seen as follows:

$ (rb_bench) python scripts/rollout.py --load_dir runs/Door-Panda-OSC-POSE-SEED17/Door_Panda_OSC_POSE_SEED17_2020_09_13_00_26_44_0000--s-0/ --horizon 200 --camera frontview

This will execute the trained model configuration used in our benchmarking on the Door environment with Panda / OSC_POSE using seed 17, with rollouts running for up to 200 timesteps per episode and the frontview camera used for visualization.
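
Similarly, to record a video of the rollouts instead of rendering onscreen, you can pass the record_video argument; this sketch assumes it is exposed as a boolean --record_video flag in utils/arguments.py:

$ (rb_bench) python scripts/rollout.py --load_dir runs/Door-Panda-OSC-POSE-SEED17/Door_Panda_OSC_POSE_SEED17_2020_09_13_00_26_44_0000--s-0/ --horizon 200 --camera agentview --record_video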

Problems?

For any problems encountered when running this repo, please submit an issue!
