Fine-Grained R2R

Code and data of the Fine-Grained R2R Dataset proposed in the EMNLP 2020 paper Sub-Instruction Aware Vision-and-Language Navigation.

This dataset enriches the benchmark Room-to-Room (R2R) dataset by dividing each instruction into sub-instructions and pairing each sub-instruction with its corresponding viewpoints in the path.

  • The copyright resides with the authors of the paper Sub-Instruction Aware Vision-and-Language Navigation.
  • This dataset is built upon the Room-to-Room (R2R) dataset; we refer readers to its repository for more details.

"The luxury of hope was given to me by the Terminator. Because if a machine can learn the value of human life, maybe we can too." --- Terminator: Judgment Day 1991.

Data

The Fine-Grained R2R data enriches the R2R dataset with sub-instructions and their corresponding paths. The overall instruction and trajectory of each sample remain the same.

  • For paths in the train, validation seen, and validation unseen splits, we add two new entries:

    • new_instructions: A list of sub-instructions produced by the Chunking Function from the complete instruction. The value is stored as a string; use import ast and ast.literal_eval() to parse it into a list (see the sketch after this list).
    • chunk_view: A list of sub-paths corresponding to the sub-instructions, where each number in the list is the index of a viewpoint in the ground-truth path. Indices start at 1.
  • Some sub-instructions, such as those referring only to camera rotation or a STOP action, may map to a single viewpoint.

  • For the test unseen split, we provide only the sub-instructions, not the sub-paths.
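
A minimal sketch of reading these fields in Python. The filename FGR2R_train.json is an assumption for illustration; point it at the actual data file in this repository:

import ast
import json

# Load one split of the Fine-Grained R2R data
# (the filename here is an assumption, not fixed by the repository).
with open('FGR2R_train.json') as f:
    data = json.load(f)

item = data[0]

# new_instructions is stored as a string literal;
# ast.literal_eval parses it back into a list of sub-instructions.
sub_instructions = ast.literal_eval(item['new_instructions'])

# chunk_view pairs each sub-instruction with viewpoint indices
# (1-based) into the ground-truth path.
chunk_view = item['chunk_view']

print(sub_instructions)
print(chunk_view)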

Source

The code of the proposed Chunking Function for generating sub-instructions.

  • Install the StanfordNLP package (v0.1.2 in our experiments) and download the English models for the neural pipeline (a setup sketch follows this list).

  • Run make_subinstr.py to generate data with sub-instructions from the original R2R data.

  • The generated files were then sent to Amazon Mechanical Turk (AMT) for annotation of the sub-paths.
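
A minimal setup sketch for the StanfordNLP neural pipeline, assuming the package version noted above; the sample sentence is illustrative and not drawn from the dataset:

import stanfordnlp

# One-time download of the English models for the neural pipeline.
stanfordnlp.download('en')

# Build the default English pipeline (tokenize, POS, lemma, depparse).
nlp = stanfordnlp.Pipeline(lang='en')

# Parse a navigation-style sentence (illustrative only).
doc = nlp("Walk past the sofa, turn left, and stop in the doorway.")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos)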

Reference

If you use or discuss the Fine-Grained R2R dataset in your work, please cite our paper:

@article{hong2020sub,
  title={Sub-Instruction Aware Vision-and-Language Navigation},
  author={Hong, Yicong and Rodriguez-Opazo, Cristian and Wu, Qi and Gould, Stephen},
  journal={arXiv preprint arXiv:2004.02707},
  year={2020}
}

Contact

If you have any questions regarding the dataset or publication, please create an issue in this repository or email yicong.hong@anu.edu.au.
