This commit was manufactured by cvs2svn to create tag 'REL_1_2_1_RC1'.

commit 9ba2f13d1885388bfe3e2dbbd41cdd380721c83b 1 parent 02dea59
cvs2svn authored
Showing with 0 additions and 330 deletions.
  1. +0 −187 tools/configure-replication.sh
  2. +0 −143 tools/configure-replication.txt
187 tools/configure-replication.sh
@@ -1,187 +0,0 @@
-#!/bin/bash
-# $Id: configure-replication.sh,v 1.1 2006-10-20 16:01:01 cbbrowne Exp $
-
-# Global defaults
-CLUSTER=${CLUSTER:-"slonytest"}
-NUMNODES=${NUMNODES:-"2"}
-
-# Defaults - origin node
-DB1=${DB1:-${PGDATABASE:-"slonytest"}}
-HOST1=${HOST1:-`hostname`}
-USER1=${USER1:-${PGUSER:-"slony"}}
-PORT1=${PORT1:-${PGPORT:-"5432"}}
-
-# Defaults - node 2
-DB2=${DB2:-${PGDATABASE:-"slonytest"}}
-HOST2=${HOST2:-"backup.example.info"}
-USER2=${USER2:-${PGUSER:-"slony"}}
-PORT2=${PORT2:-${PGPORT:-"5432"}}
-
-# Defaults - node 3
-DB3=${DB3:-${PGDATABASE:-"slonytest"}}
-HOST3=${HOST3:-"backup3.example.info"}
-USER3=${USER3:-${PGUSER:-"slony"}}
-PORT3=${PORT3:-${PGPORT:-"5432"}}
-
-# Defaults - node 4
-DB4=${DB4:-${PGDATABASE:-"slonytest"}}
-HOST4=${HOST4:-"backup4.example.info"}
-USER4=${USER4:-${PGUSER:-"slony"}}
-PORT4=${PORT4:-${PGPORT:-"5432"}}
-
-# Defaults - node 5
-DB5=${DB5:-${PGDATABASE:-"slonytest"}}
-HOST5=${HOST5:-"backup5.example.info"}
-USER5=${USER5:-${PGUSER:-"slony"}}
-PORT5=${PORT5:-${PGPORT:-"5432"}}
-
-store_path()
-{
-
-echo "include <${PREAMBLE}>;" > $mktmp/store_paths.slonik
- i=1
- while : ; do
- eval db=\$DB${i}
- eval host=\$HOST${i}
- eval user=\$USER${i}
- eval port=\$PORT${i}
-
- if [ -n "${db}" -a -n "${host}" -a -n "${user}" -a -n "${port}" ]; then
- j=1
- while : ; do
- if [ ${i} -ne ${j} ]; then
- eval bdb=\$DB${j}
- eval bhost=\$HOST${j}
- eval buser=\$USER${j}
- eval bport=\$PORT${j}
- if [ -n "${bdb}" -a -n "${bhost}" -a -n "${buser}" -a -n "${bport}" ]; then
- echo "STORE PATH (SERVER=${i}, CLIENT=${j}, CONNINFO='dbname=${db} host=${host} user=${user} port=${port}');" >> $mktmp/store_paths.slonik
- else
- err 3 "No conninfo"
- fi
- fi
- if [ ${j} -ge ${NUMNODES} ]; then
- break;
- else
- j=$((${j} + 1))
- fi
- done
- if [ ${i} -ge ${NUMNODES} ]; then
- break;
- else
- i=$((${i} + 1))
- fi
- else
- err 3 "no DB"
- fi
- done
-}
-
-mktmp=`mktemp -d -t slonytest-temp.XXXXXX`
-if [ -n "$MY_MKTEMP_IS_DECREPIT" ] ; then
- mktmp=`mktemp -d /tmp/slonytest-temp.XXXXXX`
-fi
-
-PREAMBLE=${mktmp}/preamble.slonik
-
-echo "cluster name=${CLUSTER};" > $PREAMBLE
-
-alias=1
-
-while : ; do
- eval db=\$DB${alias}
- eval host=\$HOST${alias}
- eval user=\$USER${alias}
- eval port=\$PORT${alias}
-
- if [ -n "${db}" -a -n "${host}" -a -n "${user}" -a -n "${port}" ]; then
- conninfo="dbname=${db} host=${host} user=${user} port=${port}"
- echo "NODE ${alias} ADMIN CONNINFO = '${conninfo}';" >> $PREAMBLE
- if [ ${alias} -ge ${NUMNODES} ]; then
- break;
- else
- alias=`expr ${alias} + 1`
- fi
- else
- break;
- fi
-done
-
-# The following schema is based on that of LedgerSMB
-
-ALTTABLES1="acc_trans ap ar assembly audittrail business chart
- custom_field_catalog custom_table_catalog customer customertax
- defaults department dpt_trans employee exchangerate gifi gl inventory
- invoice jcitems language makemodel oe orderitems parts partscustomer
- partsgroup partstax partsvendor pricegroup project recurring
- recurringemail recurringprint session shipto sic status tax
- transactions translation vendor vendortax warehouse yearend"
-
-for t in `echo $ALTTABLES1`; do
- ALTTABLES="$ALTTABLES public.${t}"
-done
-
-ALTSEQUENCES1="acc_trans_entry_id_seq audittrail_entry_id_seq
- custom_field_catalog_field_id_seq custom_table_catalog_table_id_seq
- id inventory_entry_id_seq invoiceid jcitemsid orderitemsid
- partscustomer_entry_id_seq partsvendor_entry_id_seq
- session_session_id_seq shipto_entry_id_seq"
-
-for s in `echo $ALTSEQUENCES1`; do
- ALTSEQUENCES="$ALTSEQUENCES public.${s}"
-done
-
-TABLES=${TABLES:-${ALTTABLES}}
-SEQUENCES=${SEQUENCES:-${ALTSEQUENCES}}
-
-SETUPSET=${mktmp}/create_set.slonik
-
-echo "include <${PREAMBLE}>;" > $SETUPSET
-echo "create set (id=1, origin=1, comment='${CLUSTER} Tables and Sequences');" >> $SETUPSET
-
-tnum=1
-
-for table in `echo $TABLES`; do
- echo "set add table (id=${tnum}, set id=1, origin=1, fully qualified name='${table}', comment='${CLUSTER} table ${table}');" >> $SETUPSET
- tnum=`expr ${tnum} + 1`
-done
-
-snum=1
-for seq in `echo $SEQUENCES`; do
- echo "set add sequence (id=${snum}, set id=1, origin=1, fully qualified name='${seq}', comment='${CLUSTER} sequence ${seq}');" >> $SETUPSET
- snum=`expr ${snum} + 1`
-done
-
-NODEINIT=$mktmp/create_nodes.slonik
-echo "include <${PREAMBLE}>;" > $NODEINIT
-echo "init cluster (id=1, comment='${CLUSTER} node 1');" >> $NODEINIT
-
-node=2
-while : ; do
- SUBFILE=$mktmp/subscribe_set_${node}.slonik
- echo "include <${PREAMBLE}>;" > $SUBFILE
- echo "store node (id=${node}, comment='${CLUSTER} subscriber node ${node}');" >> $NODEINIT
- echo "subscribe set (id=1, provider=1, receiver=${node}, forward=yes);" >> $SUBFILE
- if [ ${node} -ge ${NUMNODES} ]; then
- break;
- else
- node=`expr ${node} + 1`
- fi
-done
-
-store_path
-
-echo "
-$0 has generated Slony-I slonik scripts to initialize replication for SlonyTest.
-
-Cluster name: ${CLUSTER}
-Number of nodes: ${NUMNODES}
-Scripts are in ${mktmp}
-=====================
-"
-ls -l $mktmp
-
-echo "
-=====================
-Be sure to verify the contents of $PREAMBLE very carefully, as
-the configuration there is used widely in the other scripts.
-=====================
-====================="
143 tools/configure-replication.txt
@@ -1,143 +0,0 @@
-README
-------------
-$Id: configure-replication.txt,v 1.1 2006-10-20 16:01:01 cbbrowne Exp $
-Christopher Browne
-cbbrowne@ca.afilias.info
-2006-10-20
-------------
-
-The script configure-replication.sh is intended to allow the gentle
-user to readily configure replication using the Slony-I replication
-system for PostgreSQL. This script is based on the configuration
-approach taken by the testbed scripts in slony1-engine/tests. A
-customized version of this has been implemented for LedgerSMB
-<http://ledgersmb.org/>.
-
-For more general details about Slony-I, see <http://slony.info/>
-
-This script uses a number of environment variables to determine the
-shape of the configuration. In many cases, the defaults should be at
-least nearly OK...
-
-Global:
- CLUSTER - Name of Slony-I cluster
- NUMNODES - Number of nodes to set up
-
- PGUSER - name of PostgreSQL superuser controlling replication
- PGPORT - default port number
- PGDATABASE - default database name
-
- TABLES - a list of fully qualified table names (e.g. - complete with
- namespace, such as public.my_table)
- SEQUENCES - a list of fully qualified sequence names (e.g. - complete with
- namespace, such as public.my_sequence)
-
-Defaults are provided for ALL of these values, so that if you run
-configure-replication.sh without setting any environment variables,
-you will get a set of slonik scripts. They may not correspond, of
-course, to any database you actually want to use...
-
-For each node, there are also four parameters; for node 1:
- DB1 - database to connect to
- USER1 - superuser to connect as
- PORT1 - port
- HOST1 - host
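The per-node parameters are plain numbered shell variables, which the generator reads by index using eval. A minimal sketch of that indirection (the values here are hypothetical, not from the script's defaults):

```shell
# Hypothetical values; the generator loops i from 1 to NUMNODES and does
# exactly this kind of eval lookup for DB, HOST, USER and PORT.
DB2="payroll"
USER2="slony"
i=2
eval db=\$DB${i}
eval user=\$USER${i}
echo "node ${i}: dbname=${db} user=${user}"
```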
-
-It is quite likely that DB*, USER*, and PORT* should be drawn from the
-default PGDATABASE, PGUSER, and PGPORT values above; that sort of
-uniformity is usually a good thing.
-
-In contrast, HOST* values should be set explicitly for HOST1, HOST2,
-..., as you don't get much benefit from the redundancy replication
-provides if all your databases are on the same server!
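That uniformity comes from the parameter-expansion fallbacks the script uses for every node; a sketch of the chain for DB1:

```shell
# Each node default chains: explicit DBn, else PGDATABASE, else "slonytest".
unset DB1 PGDATABASE
DB1=${DB1:-${PGDATABASE:-"slonytest"}}
echo "$DB1"
```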
-
-slonik config files are generated in a temp directory under /tmp. The
-usage is thus:
-
-1. preamble.slonik is a "preamble" containing connection info used by
- the other scripts.
-
- Verify the info in this one closely; you may want to keep this
- permanently to use with other maintenance you may want to do on the
- cluster.
-
-2. create_nodes.slonik
-
- This is the first script to run; it sets up the requested nodes as
- being Slony-I nodes, adding in some Slony-I-specific config tables
- and such.
-
-You can/should start slon processes any time after this step has run.
-
-3. store_paths.slonik
-
- This is the second script to run; it indicates how the slons
- should intercommunicate. It assumes that all slons can talk to
- all nodes, which may not be a valid assumption in a
- complexly-firewalled environment.
-
-4. create_set.slonik
-
- This sets up the replication set consisting of the whole bunch of
- tables and sequences that make up your application's database
- schema.
-
- When you run this script, all that happens is that triggers are
- added on the origin node (node #1) that start collecting updates;
- replication won't start until #5...
-
- There are two assumptions in this script that could be invalidated
- by circumstances:
-
- 1. That all of the tables and sequences have been included.
-
- This becomes invalid if new tables get added to your schema
- and don't get added to the TABLES list in the generator
- script.
-
- 2. That all tables have been defined with primary keys.
-
- This *should* be the case soon if not already.
-
-5. subscribe_set_2.slonik
-
- And 3, and 4, and 5, if you set the number of nodes higher...
-
- This is the step that "fires up" replication.
-
- The assumption that the script generator makes is that all the
- subscriber nodes will want to subscribe directly to the origin
- node. If you plan to have "sub-clusters", perhaps where there is
- something of a "master" location at each data centre, you may
- need to revise that.
-
- The slon processes really ought to be running by the time you
- attempt running this step. To do otherwise would be rather
- foolish.
-
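Putting the steps above together, a hypothetical session might look like the following. The directory name is a placeholder for whatever mktemp directory the generator reports, and slonik/slon are assumed to be on the PATH:

```shell
SCRIPTDIR=/tmp/slonytest-temp.XXXXXX      # substitute the directory the script printed
slonik $SCRIPTDIR/create_nodes.slonik     # register the Slony-I nodes
# start one slon per node, e.g. (conninfo values are placeholders):
# slon slonytest "dbname=slonytest host=localhost user=slony port=5432" &
slonik $SCRIPTDIR/store_paths.slonik      # tell the slons how to reach each node
slonik $SCRIPTDIR/create_set.slonik       # define the set of tables and sequences
slonik $SCRIPTDIR/subscribe_set_2.slonik  # start replication to node 2
```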
-Once all of these slonik scripts have been run, replication may be
-expected to continue to run as long as slons stay running.
-
-If you have an outage, where a database server or a server hosting
-slon processes falls over, and it's not so serious that a database
-gets mangled, then no big deal: Just restart the postmaster and
-restart slon processes, and replication should pick up.
-
-If something does get mangled, then actions get more complicated:
-
-1 - If the failure was of the "origin" database, then you probably want
- to use FAIL OVER to shift the "master" role to another system.
-
-2 - If a subscriber failed, and other nodes were drawing data from it,
- then you could submit SUBSCRIBE SET requests to point those other
- nodes to some node that is "less mangled." That is not a real big
- deal; note that this does NOT require that they get re-subscribed
- from scratch; they can pick up (hopefully) whatever data they
- missed and simply catch up by using a different data source.
-
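For illustration only, the two recovery actions might be expressed in slonik roughly as follows. The node numbers and preamble path are hypothetical, and the exact syntax should be checked against the Slony-I documentation for your version:

```shell
# Case 1: origin (node 1) lost; promote node 2 with FAIL OVER.
slonik <<EOF
include </tmp/slonytest-temp.XXXXXX/preamble.slonik>;
failover (id = 1, backup node = 2);
EOF

# Case 2: a provider failed; repoint node 3 at surviving node 2.
slonik <<EOF
include </tmp/slonytest-temp.XXXXXX/preamble.slonik>;
subscribe set (id = 1, provider = 2, receiver = 3, forward = yes);
EOF
```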
-Once you have reacted to the loss by reconfiguring the surviving nodes
-to satisfy your needs, you may want to recreate the mangled node. See
-the Slony-I Administrative Guide for more details on how to do that.
-It is not overly profound; you need to drop out the mangled node, and
-recreate it anew, which is not all that different from setting up
-another subscriber.
