Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention

License: MIT

Authors: Khanh Nguyen, Debadeepta Dey, Chris Brockett, Bill Dolan.

This repo contains code and data-downloading scripts for the CVPR 2019 paper [Vision-based Navigation with Language-based Assistance via Imitation Learning with Indirect Intervention](https://arxiv.org/abs/1812.04155). We present Vision-based Navigation with Language-based Assistance (VNLA, pronounced as "Vanilla"), a grounded vision-language task where an agent with visual perception is guided via language to find objects in photorealistic indoor environments.


Development system

Our instructions assume the following are installed:

- Ubuntu
- Anaconda
- PyTorch

See setup simulator for the packages required to build the Matterport3D simulator.

The Ubuntu requirement is not mandatory. As long as you can successfully install Anaconda, PyTorch, and the other required packages, you are good!
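
As a rough illustration, a Conda-based setup might look like the sketch below. This is a minimal sketch, not taken from this repo: the environment name `vnla` and the Python version pin are assumptions for illustration only.

```bash
# Create and activate an isolated environment (the name "vnla" is arbitrary).
conda create -n vnla python=3.6
conda activate vnla

# Install PyTorch from the official pytorch channel; choose the build
# that matches your CUDA setup.
conda install pytorch torchvision -c pytorch
```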

Let's play with the code!

  1. Clone this repo: `git clone --recursive https://github.com/debadeepta/vnla.git` (don't forget the `--recursive` flag; see the sketch after this list).
  2. Download data.
  3. Setup simulator.
  4. Run experiments.
  5. Extend this project.
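
The clone step is sketched below. The `git submodule update` fallback is generic git usage for the case where you forgot the `--recursive` flag, not a repo-specific command.

```bash
# Clone the repo together with its submodules (e.g., the simulator code).
git clone --recursive https://github.com/debadeepta/vnla.git
cd vnla

# If you cloned without --recursive, fetch the submodules afterwards:
git submodule update --init --recursive
```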

Please create a GitHub issue or email kxnguyen@cs.umd.edu or dedey@microsoft.com with any questions or feedback.

FAQ

Q: What's the difference between this task and the Room-to-Room task?

A: In R2R, the agent's task is given by a detailed language instruction (e.g., "Go to the table, turn left, walk to the stairs, wait there"). The agent must execute the instruction without additional assistance.

In VNLA (our task), the task is specified only as a high-level end-goal (e.g., "Find a cup in the kitchen"); the steps for accomplishing it are not described. While trying to fulfill the task, the agent can actively request additional assistance in the form of language subgoals.

Citation

If you want to cite this work, please use the following BibTeX entry:

```bibtex
@InProceedings{nguyen2019vnla,
  author    = {Nguyen, Khanh and Dey, Debadeepta and Brockett, Chris and Dolan, Bill},
  title     = {Vision-Based Navigation With Language-Based Assistance via Imitation Learning With Indirect Intervention},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
```