# Indoor Navigation via Vision-Inertial Data Fusion (IEEE/ION PLANS 2018)

This is the code for the following paper:

Farnoosh, A., Nabian, M., Closas, P., & Ostadabbas, S. (2018, April). First-person indoor navigation via vision-inertial data fusion. In Position, Location and Navigation Symposium (PLANS), 2018 IEEE/ION (pp. 1213-1222). IEEE.

*Figure: Algorithm result (see the figs/ directory).*

Contact: Amirreza Farnoosh, Sarah Ostadabbas

## Contents

1. [Requirement](#1-requirement)
2. [iPhone App for Collecting Video-IMU](#2-iphone-app-for-collecting-video-imu)
3. [Sample Video](#3-sample-video)
4. [Running Code for Hallway Video](#4-running-code-for-hallway-video)

## 1. Requirement

This code is written in MATLAB R2016b.

## 2. iPhone App for Collecting Video-IMU

Contact Sarah Ostadabbas to request access to our iPhone App for collecting synchronized video and IMU data at an adjustable sampling frequency.

## 3. Sample Video

The original hallway video used for the experiments in the paper, along with its IMU measurements collected with our iPhone App, is included in the ./sample_video/ directory.
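
A minimal MATLAB sketch for loading this data is below. The file names are hypothetical placeholders (check ./sample_video/ for the actual names), and the IMU column layout is an assumption, not a documented format:

```matlab
% Hypothetical file names -- check ./sample_video/ for the actual ones.
vidPath = './sample_video/hallway.mov';
imuPath = './sample_video/imu_data.csv';

v = VideoReader(vidPath);            % open the video stream
frames = {};
while hasFrame(v)
    frames{end+1} = readFrame(v);    %#ok<AGROW> collect RGB frames
end

% Assumed numeric CSV layout: [time ax ay az gx gy gz].
imu = csvread(imuPath);
t   = imu(:,1);                      % timestamps (s)
acc = imu(:,2:4);                    % accelerometer (m/s^2)
gyr = imu(:,5:7);                    % gyroscope (rad/s)
```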

## 4. Running Code for Hallway Video

Run `demo_vpdetect_modular.m`.

This script contains the following sections:

- Read the entire video
- Read the IMU data
- Synchronize the IMU and video (if not already synchronized)
- Apply the GMM method to each frame
- Straight-line grouping (a generic line-detection sketch follows this list)
- Find alpha, beta, and gamma for each frame from the vanishing directions
- Kalman-filter fusion of IMU and video (a minimal fusion sketch follows this list)
- Horizon-line detection
- Plane detection and depth/width inference
- Step counting and finding step locations (a peak-detection sketch follows this list)
- 2D-map generation
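
For the straight-line grouping stage, the repository's own implementation is in `get_straight_line_segments.m`; the sketch below is a generic Hough-transform alternative (Image Processing Toolbox) that extracts segments and crudely groups them by orientation, not the paper's method:

```matlab
I  = rgb2gray(frames{1});                    % first frame from the loader above
BW = edge(I, 'canny');                       % edge map
[H, theta, rho] = hough(BW);
P  = houghpeaks(H, 50, 'Threshold', 0.3*max(H(:)));
segs = houghlines(BW, theta, rho, P, 'FillGap', 5, 'MinLength', 40);

% Crude orientation grouping as a stand-in for vanishing-direction grouping;
% Hough theta near 0 means the line itself is roughly vertical.
ang = [segs.theta];                          % segment angles (degrees)
isVertical = abs(ang) < 10;                  % roughly vertical structure lines
isOblique  = ~isVertical;                    % candidate depth lines
```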
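
The Kalman-filter fusion step can be illustrated with a scalar filter that blends a gyro-integrated heading with a vision-derived heading (e.g., from the tracked vanishing direction). This is an illustrative stand-in, not the paper's exact filter; `thetaVision`, `gyroRate`, `dt`, and the noise variances are assumed inputs:

```matlab
Q = 1e-4;            % process noise variance (gyro drift), assumed
R = 1e-2;            % measurement noise variance (vision), assumed
x = 0;               % heading estimate (rad)
P = 1;               % estimate variance

nFrames = numel(thetaVision);
thetaFused = zeros(nFrames, 1);
for k = 1:nFrames
    % Predict: integrate the gyro rate over one frame interval.
    x = x + gyroRate(k) * dt;
    P = P + Q;
    % Update: correct with the vision-derived heading measurement.
    K = P / (P + R);
    x = x + K * (thetaVision(k) - x);
    P = (1 - K) * P;
    thetaFused(k) = x;
end
```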
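
Step counting is commonly done as peak detection on the accelerometer magnitude; a sketch using `findpeaks` (Signal Processing Toolbox) is below, with the sampling rate and thresholds as assumptions rather than the repository's actual settings:

```matlab
fs  = 100;                                    % assumed IMU sampling rate (Hz)
mag = sqrt(sum(acc.^2, 2));                   % acceleration magnitude
mag = mag - mean(mag);                        % remove the gravity offset
[pks, locs] = findpeaks(mag, ...
    'MinPeakHeight', 0.5, ...                 % assumed threshold (m/s^2)
    'MinPeakDistance', round(0.3*fs));        % >= 0.3 s between steps
nSteps    = numel(locs);                      % step count
stepTimes = t(locs);                          % step timestamps
```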

## Citation

If you find our work useful in your research, please consider citing our paper:

```bibtex
@inproceedings{farnoosh2018first,
  title={First-person indoor navigation via vision-inertial data fusion},
  author={Farnoosh, Amirreza and Nabian, Mohsen and Closas, Pau and Ostadabbas, Sarah},
  booktitle={Position, Location and Navigation Symposium (PLANS), 2018 IEEE/ION},
  pages={1213--1222},
  year={2018},
  organization={IEEE}
}
```

## License

- This code is for non-commercial purposes only. For other uses, please contact ACLab at NEU.
- No maintenance service is provided.