This project is a collection of tools for working with OpenStreetMap (OSM). It is built to enable large-scale batch analytic jobs to run against the latest OSM data, as well as streaming jobs which operate on updates delivered via minutely replication files.
This library is a toolkit meant to make munging and manipulating OSM data simpler than it would otherwise be. Nevertheless, a significant degree of domain-specific knowledge is necessary to work profitably with OSM data. Prospective users would do well to study the OSM data model and to develop an intuitive sense for how the various pieces of the project hang together to enable an open-source, globe-scale map of the world.
If you're already fairly comfortable with OSM's data model, running one of the diagnostic (console-printing/debugging) Spark Streaming applications provided in the analytics subproject is probably the quickest way to explore Spark SQL and its usage within this library. To run the change stream processor application from the beginning of (OSM) time until cluster failure or user termination, try this:
```bash
# head into the 'src' directory
cd src

# build the jar we'll be submitting to spark
sbt "project analytics" assembly

# submit the streaming application to spark for process management
spark-submit \
  --class osmesa.analytics.oneoffs.ChangeStreamProcessor \
  ./analytics/target/scala-2.11/osmesa-analytics.jar \
  --start-sequence 1
```
Utilities are provided in the deployment directory to bring up a cluster and push the OSMesa jar to it. The spawned EMR cluster comes with Apache Zeppelin enabled, which allows jars to be registered/loaded for a console-like experience similar to Jupyter or IPython notebooks, but which executes Spark jobs across the entire cluster. Actually wiring up Zeppelin to use OSMesa sources is beyond the scope of this document, but it is a relatively simple configuration.
OSMesa supports the following summary statistics, aggregated at the user and hashtag level:
- Number of added buildings
- Number of modified buildings
- Number of added roads
- Number of modified roads
- Km of added roads
- Km of modified roads
- Number of added waterways
- Number of modified waterways
- Km of added waterways
- Km of modified waterways
- Number of added coastlines
- Number of modified coastlines
- Km of added coastline
- Km of modified coastline
- Number of added points of interest
- Number of modified points of interest
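The "Km of added/modified" statistics above boil down to summing great-circle segment lengths over a way's node coordinates. The following is a minimal illustrative sketch of that computation in plain Python (using the haversine formula); it is not OSMesa's actual implementation, which runs on Spark.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometers between two lon/lat points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def way_length_km(coords):
    """Sum the segment lengths of a way given as [(lon, lat), ...]."""
    return sum(haversine_km(*a, *b) for a, b in zip(coords, coords[1:]))
```

Summing `way_length_km` over the ways touched by a changeset yields the kind of per-changeset length figure that the tables below aggregate.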
Statistics calculation, whether batch or streaming, updates a few tables that jointly can be used to discover user or hashtag stats. These are the schemas of the tables being updated.
These tables are fairly normalized and thus not the most efficient layout for directly serving statistics. If serving is your goal, it may be useful to create materialized views for any further aggregation. A couple of example queries that can serve as views are provided: hashtag_statistics and user_statistics.
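To make the normalized-tables point concrete, here is a toy sketch using Python's built-in sqlite3. The table names and columns (`changesets`, `changesets_hashtags`, `roads_added`, `road_km_added`) are hypothetical stand-ins, not OSMesa's real schemas; the point is the join-plus-GROUP BY shape that a `hashtag_statistics` view would encapsulate.

```python
import sqlite3

# Hypothetical normalized layout: per-changeset stats in one table,
# hashtag links in another.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE changesets (
        id INTEGER PRIMARY KEY,
        user_id INTEGER,
        roads_added INTEGER,
        road_km_added REAL
    );
    CREATE TABLE changesets_hashtags (changeset_id INTEGER, hashtag TEXT);
    INSERT INTO changesets VALUES (1, 10, 3, 1.25), (2, 10, 1, 0.5), (3, 11, 2, 2.0);
    INSERT INTO changesets_hashtags VALUES
        (1, 'mapathon'), (2, 'mapathon'), (3, 'hotosm');
""")

# The aggregation a hashtag-level view would perform over the raw tables.
rows = conn.execute("""
    SELECT h.hashtag,
           SUM(c.roads_added)   AS roads_added,
           SUM(c.road_km_added) AS road_km_added
    FROM changesets c
    JOIN changesets_hashtags h ON h.changeset_id = c.id
    GROUP BY h.hashtag
    ORDER BY h.hashtag
""").fetchall()
print(rows)  # [('hotosm', 2, 2.0), ('mapathon', 4, 1.75)]
```

Materializing a query like this avoids re-running the join on every read of the serving layer.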
- ChangesetStats will produce an ORC file with statistics aggregated by changeset
- updates the tables
- updates the tables
- ChangeStreamProcessor prints out changes to the console (primarily for debugging)
- MergedChangesetStreamProcessor prints out changesets to the console (primarily for debugging)
Vector tiles, too, are generated both in batch and via streaming, so that a fresh set can be quickly produced and then kept up to date. Summary vector tiles are produced for two cases: to illustrate the scope of a user's contributions and to illustrate the scope of a hashtag/campaign within OSM.
- FootprintByCampaign produces a z/x/y stack of vector tiles corresponding to all changes marked with a given hashtag
- FootprintByUser produces a z/x/y stack of vector tiles which correspond to a user's modifications to OSM
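The z/x/y stacks mentioned above follow the standard slippy-map (Web Mercator) tiling scheme. As background, this sketch shows the conventional math for finding which tile contains a given lon/lat at a zoom level; it is standard tiling arithmetic, not code taken from OSMesa.

```python
from math import asinh, floor, pi, radians, tan

def lonlat_to_tile(lon, lat, zoom):
    """Return the (x, y) tile coordinates containing (lon, lat) at a zoom level.

    Uses the standard slippy-map convention: x grows eastward from lon -180,
    y grows southward from the top of the Web Mercator extent.
    """
    n = 2 ** zoom  # number of tiles along each axis at this zoom
    x = floor((lon + 180.0) / 360.0 * n)
    y = floor((1.0 - asinh(tan(radians(lat))) / pi) / 2.0 * n)
    return x, y
```

A z/x/y stack is simply this addressing applied at every zoom level of interest, with each tile holding the features that fall inside it.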