This is the official implementation of the paper *VGPNet: A Vision-aided GNSS Positioning Framework with Cross-Channel Feature Fusion for Urban Canyons*.
- Python 3.6+
- PyTorch 1.7+
- torchvision 0.8+
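The version floors above can be checked programmatically before installing. A minimal sketch; the helper `meets_minimum` is ours for illustration, not part of the repository:

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Return True if a dotted version string meets a minimum, e.g. '1.7.1' >= '1.7'."""
    def to_tuple(v):
        # Drop local suffixes like '+cu110' and compare numeric parts only.
        return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())
    inst, req = to_tuple(installed), to_tuple(required)
    width = max(len(inst), len(req))
    # Pad with zeros so '1.7' compares like '1.7.0'.
    return inst + (0,) * (width - len(inst)) >= req + (0,) * (width - len(req))

print(meets_minimum("1.7.1+cu110", "1.7"))  # True: PyTorch 1.7+ satisfied
print(meets_minimum("0.7.0", "0.8"))        # False: torchvision too old
```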
- Clone the repository:
  ```shell
  git clone https://github.com/hu-xue/VGPNet
  cd VGPNet
  ```
- Install the required packages:
  ```shell
  pip install -r requirements.txt
  ```
- Download the pre-trained models and datasets as specified in the paper.
  The processed GNSS and fisheye image datasets can be found at: BaiduNet
- Run the training and evaluation scripts as described in the paper.
  ```shell
  # Example command to train the model
  python train.py --config_file config/rw1_train.json
  # Example command to evaluate the model
  python predict.py --config_file config/rw2_predict.json
  ```
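Both `train.py` and `predict.py` take a `--config_file` flag pointing to a JSON file. A minimal sketch of that argument-parsing pattern, assuming standard `argparse`/`json` usage; this is an illustration, not code from the repository, and the actual config keys are defined by the repo's `config/*.json` files:

```python
import argparse
import json

def load_config(argv=None):
    # Sketch of the --config_file flag taken by train.py and predict.py.
    parser = argparse.ArgumentParser(description="VGPNet config loader (sketch)")
    parser.add_argument("--config_file", required=True, help="path to a JSON config")
    args = parser.parse_args(argv)
    with open(args.config_file) as f:
        return json.load(f)
```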
We would like to thank the authors of the following repositories for their open-source contributions, which have been helpful for our work: