This is the source code for our ICCV 2019 paper, which implements a visual navigation agent with a Bayesian relational memory over semantic concepts in the House3D environment.
Check out our paper here.
Bibtex:

```
@inproceedings{wu2019bayesian,
  title={Bayesian Relational Memory for Semantic Visual Navigation},
  author={Wu, Yi and Wu, Yuxin and Tamar, Aviv and Russell, Stuart and Gkioxari, Georgia and Tian, Yuandong},
  booktitle={Proceedings of the 2019 IEEE International Conference on Computer Vision (ICCV)},
  year={2019}
}
```
Our project developed a customized C++ re-implementation of the House3D environment, which is much faster, consumes orders of magnitude less memory, and provides many more APIs for task analysis and auxiliary training.
For task and environment details, please refer to the original House3D paper.
The required PyTorch version is 0.3.1.
Note: the policies were trained under PyTorch 0.2.0. To guarantee full reproducibility of our results, please switch to PyTorch 0.2.0 (this requires only very small API changes).
The code raises run-time errors under PyTorch 0.3.0, so make sure to avoid that version. The code will be kept as it is now and no further package upgrades will be performed.
- Python version 3.6.
- These packages are required: numpy, pytorch=0.3.1, gym, matplotlib, opencv, msgpack, msgpack_numpy.
- Set the House3D path properly by generating your own `config.json` file (see `config.json.example` as an example).
- (Optional) See `config.py` and ensure that all metadata files, `all_house_ids.json` (all house ids) and `all_house_targets.json` (semantic target types), are properly set.
- Stay in the root folder and ensure the following two sanity-check scripts can be properly run:
  a. training test: `python3 release/scripts/sanity_check/test_train.py`
  b. evaluation test: `python3 release/scripts/sanity_check/test_eval.py`
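Before running the sanity checks, it may help to confirm that the required packages above are importable. A minimal sketch (the import names, e.g. `cv2` for opencv and `torch` for pytorch, are the usual ones, but adjust them if your installation differs):

```python
import importlib

# Usual import names for the packages listed above (an assumption; adjust as needed).
REQUIRED = ["numpy", "torch", "gym", "matplotlib", "cv2", "msgpack", "msgpack_numpy"]

def missing_packages(names=REQUIRED):
    """Return the subset of `names` that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

if __name__ == "__main__":
    gaps = missing_packages()
    if gaps:
        print("Missing packages:", ", ".join(gaps))
    else:
        print("All required packages found.")
```

If any package is reported missing, install it before running the sanity-check scripts.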
All the scripts and trained policies are stored in the `release` folder. Always run the scripts from the root folder (i.e., the `HouseNavAgent` folder).
- For evaluating BRM agents and all the related baselines, check the `scripts/eval` folder.
- For re-training the policies, check the `scripts/train` folder.
- To create an environment instance, refer to the `create_env(...)` function in `common.py`.
- For evaluation, refer to `HRL/eval_HRL.py`. Here is the reference evaluation script. See here for descriptions of all the command line options.
- For parallel A2C training, refer to `zmq_train.py`. Here is the reference training script. See here for descriptions of all the command line options.
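As a rough illustration of how an evaluation rollout interacts with an environment instance: the sketch below uses a stand-in environment with a gym-style `reset`/`step` interface. The real environment would come from `create_env(...)` in `common.py`; the stub class and its observation format here are assumptions for illustration, not the repo's actual API.

```python
class StubEnv:
    """Hypothetical stand-in with a gym-style reset/step API.
    In the real code, the environment comes from create_env(...) in common.py."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {"rgb": None, "target": "kitchen"}  # illustrative observation format

    def step(self, action):
        self.t += 1
        done = self.t >= self.horizon
        reward = 1.0 if done else 0.0  # success reward at episode end (illustrative)
        return {"rgb": None, "target": "kitchen"}, reward, done, {}

def run_episode(env, policy, max_steps=100):
    """Roll out one episode with the given policy; returns (total_reward, steps)."""
    obs = env.reset()
    total, steps = 0.0, 0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done, _info = env.step(action)
        total += reward
        steps += 1
        if done:
            break
    return total, steps
```

For example, `run_episode(StubEnv(horizon=5), lambda obs: 0)` rolls out one 5-step episode; with the real environment and a trained policy, the same loop structure applies.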