This code learns reward functions from human preferences in various tasks by actively generating batches of scenarios and querying a human expert.

Companion code to the CoRL 2018 paper:
E. Bıyık, D. Sadigh, "Batch Active Preference-Based Learning of Reward Functions", Conference on Robot Learning (CoRL), Zurich, Switzerland, October 2018.

Dependencies

You need the following libraries with Python 3:

Running

Throughout this demo,

  • [task_name] should be one of the following: Driver, LunarLander, MountainCar, Swimmer, Tosser
  • [method] should be one of the following: nonbatch, greedy, medoids, boundary_medoids, successive_elimination, random

For the details of these methods and the positive integer parameters K, N, M, b, and B, we refer to the publication. You should run the scripts in the following order:

Sampling the input space

This is the preprocessing step, so you need to run it only once per task (subsequent runs overwrite the previous samples). It is not interactive, and it is necessary only if you will use batch active preference-based learning; for the non-batch version and random querying, you can skip this step.

You simply run

	python input_sampler.py [task_name] K

For quick (but highly suboptimal) results, we recommend K=1000. In the article, we used K=500000.
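
For example, to sample the input space for the Driver task with the quick-results setting above (the task name and K value here are just illustrative choices), you would run:

	python input_sampler.py Driver 1000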

Learning preference reward function

This is where the actual algorithms work. You can simply run

	python run.py [task_name] [method] N M b

b is required only for the batch active learning methods. We fixed B=20b; to change that, simply go to demos.py and modify line 11. Note that N must be divisible by b. After each query or batch, the user is shown the w-vector learned up to that point. To understand what those values correspond to, see the 'Tasks' section of the publication.
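
For example, one possible invocation with a batch method (the parameter values here are illustrative, chosen so that N=100 is divisible by b=10) would be:

	python run.py Driver medoids 100 10 10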

Demonstration of learned parameters

This is just for demonstration purposes. run_optimizer.py starts with 3 parameter values, which you can modify to see the optimized behavior for different tasks and different w-vectors.
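
Assuming you have edited those parameter values in the script, you then simply run:

	python run_optimizer.py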