Learning from Maps: Visual Common Sense for Autonomous Driving

Given a street view image, our model learns to estimate a set of driving-relevant road layout attributes. The ground truth attribute labels for model training are automatically extracted from OpenStreetMap.
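
As a rough illustration of where such labels come from, the sketch below (Python, standard-library XML parsing only) pulls a few driving-relevant tags from the ways in an OpenStreetMap extract such as the included `map.osm`. The tag names follow OpenStreetMap conventions; the attribute set and correspondence logic actually used for training are defined in the MATLAB pipeline, so treat this purely as an example of the kind of information available in the map data.

```python
# Illustrative only: list a few driving-relevant OpenStreetMap tags per road.
import xml.etree.ElementTree as ET

def road_attributes(osm_path):
    """Yield (way_id, attributes) for every tagged road in an .osm file."""
    tree = ET.parse(osm_path)
    for way in tree.getroot().iter('way'):
        tags = {t.get('k'): t.get('v') for t in way.iter('tag')}
        if 'highway' not in tags:   # skip non-road ways (buildings, etc.)
            continue
        yield way.get('id'), {
            'road_type': tags.get('highway'),
            'lanes': tags.get('lanes'),
            'oneway': tags.get('oneway', 'no'),
            'speed_limit': tags.get('maxspeed'),
        }

for way_id, attrs in road_attributes('map.osm'):
    print(way_id, attrs)
```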

Project page: http://www.cs.princeton.edu/~aseff/mapnet

PDF: https://arxiv.org/pdf/1611.08583.pdf

Citation

@article{seffxiao2016,
  title={Learning from Maps: Visual Common Sense for Autonomous Driving},
  author={Seff, Ari and Xiao, Jianxiong},
  journal={arXiv preprint arXiv:1611.08583},
  year={2016}
}

Requirements

  • Python 2.7 or later
  • MATLAB
  • Marvin (deep learning framework)

Instructions

main.m demonstrates the full pipeline for downloading images from Google Street View, establishing correspondence with OpenStreetMap roads for label extraction, and training models for road attribute estimation.
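
For reference, a single street view image for a given location can be fetched through Google's Street View Static API as in the sketch below. This is only an illustrative snippet, not the panorama download performed by main.m; the coordinates are arbitrary and `GSV_API_KEY` is a placeholder for your own Google API key.

```python
# Illustrative only: fetch one Street View image via the Street View Static API.
try:
    from urllib import urlencode, urlretrieve        # Python 2
except ImportError:
    from urllib.parse import urlencode               # Python 3
    from urllib.request import urlretrieve

params = urlencode({
    'size': '640x400',
    'location': '40.3499,-74.6514',  # example coordinates
    'heading': 90,                   # camera heading in degrees
    'fov': 90,                       # horizontal field of view
    'pitch': 0,
    'key': 'GSV_API_KEY',            # placeholder API key
})
urlretrieve('https://maps.googleapis.com/maps/api/streetview?' + params,
            'streetview.jpg')
```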

Dataset and pre-trained networks: The dataset of Google Street View panoramas with ground truth road attribute labels, along with pre-trained networks, is available for download at http://www.cs.princeton.edu/~aseff/mapnet