
Overview

OHDM uses several databases and can create a number of export formats. Here is an overview:

(Figure: OHDM databases and supported formats.) The yellow components in the figure are provided by OHDM. That software is under GPL 3.0 and can be found in the OHDM Software Repository. OHDM uses and archives OSM data. We work on a web import application.

OHDM provides several output formats:

  • OSM files. OHDM can produce an OSM file for a given date and a given region on earth. Those files can be used, e.g., by offline viewers.
  • stRDF files. Spatio-temporal RDF (stRDF) is an extension of RDF which supports geometries.
  • Mapnik: OHDM can produce the same tables as osm2pgsql, which are used e.g. by Mapnik. Those tables have two additional date columns (since, until) declaring the validity of the objects. There is a patched Mapnik which understands both columns and can produce time-sensitive maps.

The transformation between all those structures is done by a Java program. It is recommended to use the jar. The rest of this section explains the parameters of this program.

Related Projects

Import Tool is a web application that offers a simple interface to upload (historic) geometries and attach the required attributes (validity time, classification (road, building, etc.)).

Android Offline Viewer is an Android app. It is an offline renderer / viewer for OSM files specifically tailored for use with OHDM.

Parameters

  • -o [file name] defines the OSM file name. This can be either an input or an output file
  • -i [parameter file name] defines the name of a parameter file which describes the intermediate database

This parameter file has the following structure:

servername:your_server_dns_name_or_ip_address
portnumber:port_number
username:a_user_name_with_appropriate_rights
pwd:user_password
dbname:database_name
schema:schema_for_intermediate_database

Note: In this version, all databases MUST be located on the same database server.
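A filled-in parameter file might look like this. All values below are placeholders, not shipped defaults; the port 5432 merely assumes a PostgreSQL server:

servername:localhost
portnumber:5432
username:ohdm_user
pwd:secret
dbname:ohdm
schema:intermediate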

  • -u [parameter file name] declares the update database
  • -d [parameter file name] declares the OHDM database
  • -r [parameter file name] declares the OHDM rendering database. Its parameter file requires an additional line (a complete example file is shown after this list):

..
renderoutput:[generic | all | v1]
  • generic produces tables for each map feature

  • all and v1 are map versions of OHDM which still need to be documented elsewhere (sorry, working on it). Just try them - no harm will be done, only a few tables are created

  • -m [parameter file name] declares the OHDM Mapnik database

  • -p [WKT-Polygon-String] an EPSG:4326 polygon described in well-known text (WKT) format (like "POLYGON((10 45, 10 55, 15 55, 15 45, 10 45))")

  • -t [Date-String] a date string like "2117-12-11" (quite an interesting date btw.)

  • -s [filename] stRDF output file name. Note: that file is created / overwritten.
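As announced above, a rendering database parameter file (passed with -r) could look like this; all values except the renderoutput line are placeholders:

servername:localhost
portnumber:5432
username:ohdm_user
pwd:secret
dbname:ohdm
schema:rendering
renderoutput:generic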

Parameter combinations and actions

Combinations of parameters produce different output. Here is an overview of some combinations:

| Parameters | Action |
| --- | --- |
| -o -i | Imports an OSM file into the intermediate database |
| -i -d | Drops and recreates the OHDM database and fills it with data from the intermediate database (can be very time consuming) |
| -o -i -d | Combination: sets up a fresh OHDM database from an OSM file |
| -o -i -u -d | Imports an OSM file into the update database and starts the update process of the OHDM/OSM archive |
| -d -r | Creates rendering tables from OHDM |
| -r -m | Creates Mapnik (osm2pgsql) compatible tables from OHDM rendering tables |
| -r -p -t -o | Creates an OSM file containing data which is valid at the given date and within the given polygon. Note: parameter -o declares an output file; this file is created / overwritten. |
| -r -p -s | Creates an stRDF file containing data which is valid within the given polygon |

Examples

java -jar OHDMConverter.jar -o planet.osm -i db_inter.txt -d db_ohdm.txt -r db_rendering.txt
java -jar OHDMConverter.jar -o ohdm_extracted.osm -r db_rendering.txt -p "POLYGON((10 45, 10 55, 15 55, 15 45, 10 45))" -t "2016-12-31"
java -jar OHDMConverter.jar -r db_rendering.txt -s stRDF.out.txt -p "POLYGON((10 45, 10 55, 15 55, 15 45, 10 45))"
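Analogous invocations for the remaining combinations from the table above might look like this (all file names are placeholders):

java -jar OHDMConverter.jar -o planet.osm -i db_inter.txt
java -jar OHDMConverter.jar -o update.osm -i db_inter.txt -u db_update.txt -d db_ohdm.txt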

Initial / update import from OSM

OHDMConverter converts data. Some processing can be performed in parallel - some cannot. Reading or writing an OSM file cannot be parallelized. Filling the OHDM database from an intermediate database can. The chunk reader feature was introduced to shorten that very long-running process.

  • -buildimportcmd .. forces the converter to produce command lines instead of performing the import itself

Each command processes a number of rows (a chunk) from the intermediate database. We call this chunk processing.

It can be followed by some optional parameters:

  • -parallel [#processes] .. creates commands that launch up to the given number of parallel processes. Rule of thumb: half of your CPU cores if you don't plan to do anything else on that machine. (Just half of them - the database requires resources as well.)
  • -size [number] .. size of each chunk - default is 10,000. Increase this number if you have a fast computer.
  • -classpath [jarfiles] .. required jar files (e.g. the JDBC driver)

The result is written to standard out. It is a good idea to redirect this output into a file.

Example

java -jar OHDMConverter.jar -buildimportcmd -parallel 2 -size 100000 -i db_inter.txt -d db_ohdm.txt

would produce an output like this:

java -jar OHDMConverter.jar -chunkprocess -i db_inter.txt -d db_ohdm.txt -reset -from 1 -to 100000 -nodes 1>> chunkImport.log 2>> chunkImport.err

java -jar OHDMConverter.jar -chunkprocess -i db_inter.txt -d db_ohdm.txt -from 100001 -to 200000 -nodes 1>> chunkImport.log 2>> chunkImport.err &

java -jar OHDMConverter.jar -chunkprocess -i db_inter.txt -d db_ohdm.txt -from 200001 -to 300000 -nodes 1>> chunkImport.log 2>> chunkImport.err

java -jar OHDMConverter.jar -chunkprocess -i db_inter.txt -d db_ohdm.txt -from 300001 -to 400000 -nodes 1>> chunkImport.log 2>> chunkImport.err &

..

java -jar OHDMConverter.jar -chunkprocess -i db_inter.txt -d db_ohdm.txt -from 1 -to 100000 -ways 1>> chunkImport.log 2>> chunkImport.err &

...

Put these commands into a file and run it as a shell script. Up to two processes work in parallel in this example (-parallel 2). Depending on your hardware, this can be much faster. Note: relation import cannot be parallelized.
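Putting it together, a complete run might look like this, assuming the generated commands are redirected into a (hypothetical) script file chunkImport.sh:

java -jar OHDMConverter.jar -buildimportcmd -parallel 2 -size 100000 -i db_inter.txt -d db_ohdm.txt > chunkImport.sh
sh chunkImport.sh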