
NeoNav: Improving the Generalization of Visual Navigation via Generating Next Expected Observations


NeoNav

This is the implementation of our AAAI 2020 paper NeoNav: Improving the Generalization of Visual Navigation via Generating Next Expected Observations, with training and evaluation on the Active Vision Dataset (depth only).

Navigation Model

Implementation

Training

  • Environment: CUDA 10.0, Python 3.6.4, PyTorch 1.0.1
  • Please download the "depth_imgs.npy" file from AVD_Minimal and put it in the train folder.
  • Please download our training data HERE.
  • Our trained model can be downloaded from HERE. If you plan to train your own navigation model from scratch, some suggestions:
    • Pre-train the model with "python3 ttrain.py" and stop the training once the action prediction accuracy reaches about 70%.
    • Then run "python3 train.py" to train the full NeoNav model.
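Before training, it can help to sanity-check the downloaded depth data. A minimal sketch, assuming only the "depth_imgs.npy" filename from the steps above; the array layout shown here (frames, height, width) is an assumption, not the file's documented format:

```python
import numpy as np

# Hypothetical stand-in: write a small array under the same name so the
# snippet is self-contained. In practice, load the real depth_imgs.npy
# downloaded from AVD_Minimal instead.
dummy = np.random.rand(4, 64, 64).astype(np.float32)  # (frames, H, W) is an assumption
np.save("depth_imgs.npy", dummy)

depth = np.load("depth_imgs.npy")
print(depth.shape, depth.dtype)  # confirm frame count and precision

# Basic checks before feeding the data into the training pipeline.
assert depth.ndim >= 2
assert np.isfinite(depth).all()
```

If the shape or dtype looks wrong here, the download is likely incomplete or corrupted, which is cheaper to catch before a long training run.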

Testing

  • To evaluate our model, please run "python3 ./test/evaluate.py" or "python3 ./test/evaluete_with_stop.py".
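As a hedged illustration of what a navigation evaluation typically computes, here is a sketch of the standard success rate and SPL metrics; the function names and episode-record format below are assumptions for illustration, not the actual output format of the scripts above:

```python
# Sketch of common visual-navigation metrics; the episode dicts are hypothetical.

def success_rate(episodes):
    """Fraction of episodes in which the agent reached the goal."""
    return sum(e["success"] for e in episodes) / len(episodes)

def spl(episodes):
    """Success weighted by Path Length: for each successful episode,
    shortest_len / max(path_len, shortest_len), averaged over all episodes."""
    total = 0.0
    for e in episodes:
        if e["success"]:
            total += e["shortest_len"] / max(e["path_len"], e["shortest_len"])
    return total / len(episodes)

# Toy example: two successes (one optimal, one detour) and one failure.
episodes = [
    {"success": True,  "path_len": 12.0, "shortest_len": 10.0},
    {"success": True,  "path_len": 10.0, "shortest_len": 10.0},
    {"success": False, "path_len": 25.0, "shortest_len": 8.0},
]
print(success_rate(episodes))  # 2 of 3 episodes succeed
print(spl(episodes))           # detours and failures lower SPL below the success rate
```

SPL penalizes inefficient paths, so it is always at most the success rate; comparing the two shows how much performance comes from reaching goals versus reaching them efficiently.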

Results

(Figure: example navigation episodes, showing start and end observations.)

Contact

To ask questions or report issues, please open an issue on the issue tracker.

Citation

If you use NeoNav in your research, please cite the paper:

@inproceedings{wuneonav,
  title={NeoNav: Improving the Generalization of Visual Navigation via Generating Next Expected Observations},
  author={Wu, Qiaoyun and Manocha, Dinesh and Wang, Jun and Xu, Kai},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2020}
}
