Open Multimodal Place Recognition
How to run the code?
Prepare the environment and dependencies on your computer, then download the source code, the additional files, and the dataset. Before running the code, you may adjust the settings in the configuration file.
If you are using this code in your research, please cite the paper:
Cheng, Ruiqi, et al. "OpenMPR: Recognize places using multimodal data for people with visual impairments." Measurement Science and Technology (2019). https://doi.org/10.1088/1361-6501/ab2106
The code is developed with Visual Studio 2017 on Windows 10.
The pre-trained CNN model can be downloaded at GoogLeNet-Places365, and should be unzipped into the source code folder.
The dataset is available at Multimodal Dataset.
The configuration file
Config.yaml is located in the OpenMultiPR folder; detailed information on the parameters can be found in the yaml file itself. The dataset and BoW vocabulary paths are assigned in the yaml file. The configuration file also includes parameters on whether to use the specific modalities (i.e. RGB, infrared, depth and GNSS data) and the corresponding descriptors (i.e. GIST, ORB-BoW, LDB). The running mode can also be set in the file.
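To illustrate the description above, a Config.yaml might look like the following sketch. The key names and values here are assumptions for illustration only, not the actual keys used by OpenMPR; consult the shipped Config.yaml for the real parameter names.

```yaml
# Hypothetical sketch of Config.yaml -- key names are assumed, not verbatim.
dataset_path: "D:/data/multimodal_dataset"   # path to the downloaded dataset
bow_vocabulary: "D:/data/orb_vocab.yml"      # BoW vocabulary for the ORB descriptor

# Modality switches (assumed names)
use_rgb: true
use_infrared: true
use_depth: false
use_gnss: true

# Descriptor switches (assumed names)
use_gist: true
use_orb_bow: true
use_ldb: false

# Running mode (assumed values)
running_mode: "test"
```

Paths should be adapted to where the dataset and vocabulary were downloaded; toggling the modality and descriptor switches selects which data sources contribute to place recognition.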