Instructions for compiling OSRM can be found here.
Instructions for compiling and running OSRM on an Amazon EC2 Micro Instance here.
Exported OSM data files can be obtained from providers such as Geofabrik. OSM data comes in a variety of formats, including XML and PBF, and contains a plethora of information. Much of it is irrelevant to routing, such as the positions of public waste baskets. Moreover, the data does not conform to a strict standard, and important information can be described in various ways. It is therefore necessary to extract the routing data into a normalized format. This is done by the OSRM tool named extractor. It parses the contents of the exported OSM file and writes out three files: one with the suffix .osrm containing the routing data, one with the suffix .osrm.restrictions containing restrictions on making certain turns during navigation, and one with the suffix .osrm.names containing the names of all the roads.
The speed profile (profile.lua) is used during this process to determine which ways can be routed along and which cannot (private roads, and so on). To get a proper profile.lua, create a symbolic link from inside your build directory:
ln -s ../profiles/car.lua profile.lua
It might also be necessary to add a symlink to the profile-lib directory inside your build directory:
ln -s ../profiles/lib/
Suppose that you download an OSM data file with the name map.osm (an XML file with an .osm extension). You would extract this file using OSRM with the following command:
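The command itself appears to have been lost from the page; assuming the tools sit in the build directory, the v0.3.x-era invocation would be roughly:

```shell
# Extract the routing data from the exported OSM file.
# Produces map.osrm, map.osrm.restrictions and map.osrm.names
# next to the input file.
./osrm-extract map.osm
```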
Extracting a file containing data for the entire planet will take a few hours, depending mostly on the speed of your hard disks. On a Core i7 with 8 GB RAM and (slow) 5400 RPM Samsung SATA disks it took about 65 minutes from a PBF-formatted planet. Your mileage may vary: the faster your disks and processor, the faster it will be, and SSDs are certainly an advantage here. Most of the data is kept on disk because RAM is scarce; extracting a planet file would otherwise take up dozens of gigabytes of RAM, currently about 35 GB. The tool can also handle bzip2-compressed files as well as PBF files:
./osrm-extract map.osm.bz2 and
./osrm-extract map.osm.pbf
will both work fine. PBF is generally the better choice. Note that preprocessing the planet, i.e. the next step, is much more resource-hungry.
External memory accesses are handled by the stxxl library. Although you can run the code without any special configuration, you might see a warning similar to
[STXXL-ERRMSG] Warning: no config file found.
Provided you have enough free disk space, you can happily ignore the warning, or create a config file named .stxxl in the same directory where the extractor tool sits. The following is taken from the stxxl manual:
You must define the disk configuration for an STXXL program in a file named '.stxxl' that must reside in the same directory where you execute the program. You can change the default file name for the configuration file by setting the environment variable STXXLCFG.
Each line of the configuration file describes a disk. A disk description uses the following format: disk=full_disk_filename,capacity,access_method
An example config file looks like this: disk=/tmp/stxxl,25000,syscall
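As a concrete sketch, such a config file could be created from the build directory like this (the path and the 25000 MiB capacity are just the example values from above):

```shell
# Write a .stxxl config file into the current directory,
# which must be the directory the extractor is run from.
# Format: disk=<backing file>,<capacity in MiB>,<access method>
cat > .stxxl <<'EOF'
disk=/tmp/stxxl,25000,syscall
EOF
```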
It is generally a good idea to have the planet file on a separate partition, since this avoids a large number of concurrent (read: slow, thanks Dennis S.) read/write disk accesses.
The so-called Hierarchy is a large amount of precomputed data that enables the routing engine to find shortest paths quickly. It is created by the following command line,
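The command line itself seems to be missing from the page; with the v0.3.x-era tool names, the hierarchy would be built roughly like this:

```shell
# Build the precomputed Hierarchy from the extracted road network
# and the turn restrictions produced in the extraction step.
./osrm-prepare map.osrm map.osrm.restrictions
```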
where map.osrm is the extracted road network and map.osrm.restrictions is a set of turn restrictions. Both are generated by the previous step. A nearest-neighbor data structure and a node map are created alongside the hierarchy. Once computation has finished, there should be another four files: map.osrm.hsgr (the Hierarchy), map.osrm.nodes (the nodemap), map.osrm.ramIndex (stage 1 index), map.osrm.fileIndex (stage 2 index).
See also the config files page for additional parameters to configure the extractor.
If you run the command
without any parameters, it is fully controlled by a config file named server.ini. A sample server.ini file is as follows:
Threads = 4
IP = 0.0.0.0
Port = 5000
hsgrData=./map.osrm.hsgr
nodesData=./map.osrm.nodes
edgesData=./map.osrm.edges
ramIndex=./map.osrm.ramIndex
fileIndex=./map.osrm.fileIndex
namesData=./map.osrm.names
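Assuming the routing daemon binary is named osrm-routed (as in OSRM 0.3.x), starting it with this config would then simply be:

```shell
# Start the routing daemon; run with no arguments, it reads
# server.ini from the current working directory.
./osrm-routed
```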
See also the config files page for the parameters. If you run
it will display all command-line options for manually specifying any of the files.
We just released OSRM v0.3.7 with huge improvements for running OSRM in a high-availability production environment. With these changes, you can load all the routing data directly into shared memory. It's as easy as:
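The command appears to have been lost in formatting; assuming the v0.3.7-era binary name, it would look roughly like:

```shell
# Load the preprocessed dataset into shared memory so that
# routing processes can attach to it instead of reading files.
./osrm-datastore map.osrm
```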
If there is insufficient available RAM (or not enough space configured), you will receive the following warning when loading data with osrm-datastore:
[warning] could not lock shared memory to RAM
In this case, data will be swapped to a cache on disk, and you will still be able to run queries. But note that caching comes at the price of disk latency. Again, consult the Wiki for instructions on how to configure your production environment. Starting the routing process and pointing it to shared memory is also very, very easy:
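Assuming the 0.3.7-era binary and flag names, this would look roughly like:

```shell
# Run the routing daemon against the data held in shared memory
# instead of having it load the files itself.
./osrm-routed --sharedmemory=yes
```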
See this for more information.
OSRM comes with a number of tests using the cucumber framework. Further information on how to run the tests can be found here.
Last edited by Gowtham,