
This file has been generated by

   lynx -nonumbers -dump "" > README.txt 

and then slightly trimmed. It is suggested that you read the Wiki version if 
possible, as it may be more up to date.



From OpenStreetMap Wiki

   These scripts are used in conjunction with the -O gazetteer mode of
   osm2pgsql to generate a database suitable for geo-coding.


        * 1 Changes
        * 2 Prerequisites
             + 2.1 Software
             + 2.2 PostgreSQL Version
             + 2.3 Hardware
        * 3 First Installation
             + 3.1 Make the database
             + 3.2 Import OSM data
             + 3.3 Build the transliteration module
             + 3.4 Various supplementary data, used to patch holes in OSM
             + 3.5 Create website user
             + 3.6 Add gazetteer functions to database
             + 3.7 Copy the data into the live tables
             + 3.8 Index the database
             + 3.9 Various 'special' words for searching
             + 3.10 Set up the website
        * 4 Updates
             + 4.1 Hourly/Daily Diffs
             + 4.2 Osmosis




Prerequisites

Software

     * GCC compiler
     * PostgreSQL
     * Proj4
     * GEOS
     * PostGIS
     * PHP (both apache and command line)
     * PHP-pgsql
     * PEAR::DB
     * wget

   In standard Debian/Ubuntu distributions these should all be available
   as packages.

PostgreSQL Version

   Please be aware that various problems have been found running Nominatim
   on PostgreSQL 8.4. It is currently recommended to use PostgreSQL 8.3,
   although there are some reports that version 9.0 might have resolved
   the issues.


Hardware

   For a full planet install you will need a minimum of 250GB of hard disk
   space. On the OSM Nominatim server, the initial import (osm2pgsql)
   takes around 30 hours, and the rest of the indexing process takes
   approximately 10 days using both processors in parallel.

   On a 16-core 48 GB machine with fast disks, the initial import takes
   around 4 hours, and the rest of the indexing process ##TBD##.

First Installation

   Note: You may still find the database name "gazetteer" or
   "gazetteerworld" and the user name "twain" hard-coded in some parts.
   When using different names, make sure to grep for the old names and
   change them.
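A quick way to locate the remaining hard-coded names (a sketch; run it from the top of your checkout):

```shell
# List every file below the current directory that still contains one of
# the hard-coded names; review and edit each hit by hand.
grep -rl -e gazetteerworld -e twain .
```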

Make the database

 createdb gazetteer
 createlang plpgsql gazetteer
 cat /usr/share/postgresql/8.3/contrib/_int.sql | psql gazetteer

   (Install location of /contrib and /postgis directories may differ on
   your machine.)
 cat /usr/share/postgresql/8.3/contrib/pg_trgm.sql | psql gazetteer
 cat /usr/share/postgresql-8.3-postgis/lwpostgis.sql | psql gazetteer

   (lwpostgis.sql is replaced with postgis.sql in newer versions of
   PostGIS.)
 cat /usr/share/postgresql-8.3-postgis/spatial_ref_sys.sql | psql gazetteer

Import OSM data

   First, download a [Planet File].

   Compile osm2pgsql from source, unless you have already got a package
   for it:
 cd osm2pgsql

   Load the planet file. The database created in this step is not
   compatible with one you might already have for rendering, created
   without the -O gazetteer option:
 ./osm2pgsql -lsc -O gazetteer -C 2000 -d gazetteer planet.osm.bz2

   Make sure that you have -l (--latlon) and -s (--slim); these are
   required. -C is the cache size in MB. If you have enough memory, set -C
   to 8 times the highest node ID divided by one million (at the time of
   writing, -C 8000); this is sure to give you the best performance.
   Higher values do not improve performance. If you do not have that much
   memory, use as much as you have.
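As a worked example of that rule of thumb (the node ID below is illustrative, not current -- check the latest planet file for the real value):

```shell
# Cache size rule of thumb: 8 MB per million of the highest node ID.
MAX_NODE_ID=1300000000                    # illustrative value only
echo $(( 8 * MAX_NODE_ID / 1000000 ))     # -> 10400, i.e. use -C 10400
```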

   You do not have to expand the planet file; osm2pgsql will handle the
   decompression itself.

   Ignore notices about missing functions and data types.

   If you get a projection initialization error, your proj installation
   can't be found in the expected location. Copying the proj folder to
   /usr/share/ will solve this.

Build the transliteration module

 cd gazetteer

   Update gazetteer-functions.sql to give the absolute path to the module,
   replacing /home/twain/osm2pgsql/gazetteer/ with your own path.
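One way to do the substitution in place (a sketch; /opt/osm2pgsql/gazetteer/ is a placeholder for your actual checkout path):

```shell
# Replace the hard-coded module path with your own absolute path.
# The replacement path below is a placeholder - use your checkout location.
sed -i 's|/home/twain/osm2pgsql/gazetteer/|/opt/osm2pgsql/gazetteer/|g' \
    gazetteer-functions.sql
```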

Various supplementary data, used to patch holes in OSM data

 cd gazetteer
 cat import_worldboundaries.sql | psql gazetteer
 cat import_country_name.sql | psql gazetteer
 cat import_gb_postcode.sql | psql gazetteer
 cat import_gb_postcodearea.sql | psql gazetteer
 cat import_us_state.sql | psql gazetteer
 cat import_us_statecounty.sql | psql gazetteer

Create website user

   Create the user your web server runs as (typically www-data or apache):
 createuser -SDR www-data

Add gazetteer functions to database

 cat gazetteer-functions.sql | psql gazetteer

   If you get an error "ERROR: type "planet_osm_ways" does not exist",
   that means you are trying to run the script on a non-slim database
   which is not supported; you need to specify -s on first import.

   Ignore any errors about place_boundingbox not existing on the first
   run:
 cat gazetteer-tables.sql | psql gazetteer
 cat gazetteer-functions.sql | psql gazetteer

   You really do need to run gazetteer-functions.sql TWICE!

Copy the data into the live tables

   This does the first stage of indexing using various triggers and will
   take a while (for a full planet, somewhere between 10 and 30 hours
   depending on your setup):
 cat gazetteer-loaddata.sql | psql gazetteer

Index the database

   This will take a very long time - up to 10 days for a full planet on a
   small machine.

   For small imports (single country) you can use:
 cat gazetteer-index.sql | psql gazetteer

   For anything large you will need to use:
 ./util.update.php --index

   Be sure to fix the database connection string in that PHP script to
   access your database.

   If you have a multi-processor system you can run multiple instances in
   parallel:
 ./util.update.php --index --index-instances 3 --index-instance 0
 ./util.update.php --index --index-instances 3 --index-instance 1
 ./util.update.php --index --index-instances 3 --index-instance 2

   If you run these in the background you have to redirect stdin or else
   they will stop. You might also want to consider setting --max-load to
   something close to half your number of cores on a larger machine:
 for i in `seq 0 7`; do
   ./util.update.php --index --index-instances 8 --index-instance $i --max-load 10 < /dev/null &
 done

Various 'special' words for searching

   There is a detailed description in the file itself.
 cat import_specialwords.sql | psql gazetteer

Set up the website

 cp website/* ~/public_html/

   You will need to make sure settings.php is configured to connect to
   your database - edit website/.htlib/settings.php.


Updates

   If you want to run continuous updates, you can either use Osmosis to
   download replication diffs or have the util.update.php script load
   hourly or daily diffs.

Hourly/Daily Diffs

   Update the table 'import_status' to reflect the date of your planet
   dump file (you might want to have a day's overlap to ensure that no
   data is missed).
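For instance, if your planet file is dated 2010-01-15, GNU date can compute the overlapped start date to store in import_status (the date here is illustrative; check your own dump's timestamp, and verify the column name against the table definition):

```shell
# Start updates one day before the planet dump date so no edits are missed.
PLANET_DATE=2010-01-15                       # illustrative dump date
date -d "$PLANET_DATE - 1 day" +%Y-%m-%d     # -> 2010-01-14
```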

   Edit util.update.php to replace /home/twain with the location you wish
   to store your files, then run:
 ./util.update.php --import-daily --import-all --index

