# osrm-adapter-batch

This repository contains the micro-batch Spark jobs that generate CSV files to be imported by OSRM. See the OSRM documentation on how OSRM handles traffic updates.
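For reference, OSRM consumes traffic updates as a plain CSV of per-edge speeds. A minimal sketch of that format (the node IDs below are made up; per the OSRM traffic documentation the columns are from-OSM-node, to-OSM-node, speed in km/h, and an optional rate):

```
86909053,86909055,10
86909055,86909053,10,7.5
```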

This project uses the MongoDB Spark Connector.
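The overall shape of such a job can be sketched as below. This is a minimal illustration only, not the actual `OsrmAdapterBatch` implementation: the column names (`fromNode`, `toNode`, `speed`) are hypothetical and must match the real collection schema, and the URIs/paths mirror the example command further down.

```scala
import org.apache.spark.sql.SparkSession
import com.mongodb.spark.MongoSpark

object OsrmAdapterBatchSketch {
  def main(args: Array[String]): Unit = {
    // The connector reads its source collection from spark.mongodb.input.uri
    val spark = SparkSession.builder()
      .appName("OsrmAdapterBatch")
      .config("spark.mongodb.input.uri",
        "mongodb://localhost:27017/database.collection")
      .getOrCreate()

    // Load the MongoDB collection as a DataFrame
    val df = MongoSpark.load(spark)

    // Project the edge-speed columns (hypothetical names) and
    // write them as headerless CSV into HDFS for OSRM to import
    df.select("fromNode", "toNode", "speed")
      .write
      .option("header", "false")
      .csv("file-path-in-hdfs.csv")

    spark.stop()
  }
}
```

Running this requires a Spark cluster (or local master) with the connector package on the classpath, as shown in the `spark-submit` command below.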

For development, it is recommended to use VSCode with the Scala Syntax (official) and Scala (Metals) extensions for IntelliSense.

Usage (note that `spark-submit` requires the application jar before the program arguments; `<application-jar>` is a placeholder for the built artifact):

```
spark-submit --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.1 \
  --class "OsrmAdapterBatch" <application-jar> \
  mongodb://localhost:27017/database.collection file-path-in-hdfs.csv
```

To download the produced file as a single CSV to the local machine, use:

```
hadoop fs -getmerge file-path-in-hdfs.csv ./local-file-path.csv
```
