= Setting up simple Londiste3 replication =

== Introduction ==

This sample does the following actions:

* sets up the databases
  - creates a database 'l3simple' on the host which will be the master
  - populates it with the pgbench schema and data
  - adds primary and foreign keys to make the db more realistic
  - makes a copy of the database on another node to be used as the slave
* sets up replication from the master to the slave database
  - creates the root node on the master node
  - creates a leaf node on the slave node
  - starts the ticker daemon on the master node
  - adds all tables to the replication set on both databases
  - waits for the replication to complete

It also runs pgbench to test that the replication actually happens and works properly.
== Set up schema for root database ==

=== Create database ===

Run the following SQL:
----
CREATE DATABASE l3simple;
----

=== Set up pgbench schema ===

In this HowTo we use pgbench to set up the schema, populate it with
sample data, and later run SQL loads to be replicated.

Run the following command:
----
pgbench -i -s 2 -F 80 l3simple
----
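Here `-s 2` is the scale factor and `-F 80` the fill factor. The pgbench
table sizes follow directly from the scale factor (one branch, ten tellers
and 100,000 accounts per scale unit; the history table starts empty), which
is handy when sanity-checking row counts after replication. A small sketch:

```python
# Expected pgbench table sizes for a given scale factor: pgbench creates
# 1 branch, 10 tellers and 100,000 accounts per scale unit, and leaves
# pgbench_history empty at initialization time.
def pgbench_row_counts(scale):
    return {
        "pgbench_branches": scale,
        "pgbench_tellers": 10 * scale,
        "pgbench_accounts": 100000 * scale,
    }

print(pgbench_row_counts(2))
```

With scale factor 2 this gives 2 branches, 20 tellers and 200,000 accounts,
matching the row counts reported by the compare run later in this howto.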
=== Add primary and foreign keys needed for replication ===

The standard pgbench schema lacks a primary key on the history table.
As Londiste needs primary keys, we add one. We also add foreign keys
between tables, as these may help detect possible replication failures.

Create the file /tmp/prepare_pgbenchdb_for_londiste.sql with the following content:
----
-- add primary key to history table
ALTER TABLE pgbench_history ADD COLUMN hid SERIAL PRIMARY KEY;
-- add foreign keys
ALTER TABLE pgbench_tellers ADD CONSTRAINT pgbench_tellers_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_accounts ADD CONSTRAINT pgbench_accounts_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_branches_fk FOREIGN KEY(bid) REFERENCES pgbench_branches;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_tellers_fk FOREIGN KEY(tid) REFERENCES pgbench_tellers;
ALTER TABLE pgbench_history ADD CONSTRAINT pgbench_history_accounts_fk FOREIGN KEY(aid) REFERENCES pgbench_accounts;
----
Then load it into the database:
----
psql l3simple -f /tmp/prepare_pgbenchdb_for_londiste.sql
----

Create and populate the target database on the slave node:
----
psql -d postgres -c "CREATE DATABASE l3simple;"
pg_dump -s -h 10.199.59.194 l3simple | psql l3simple
----
Create configuration file st3simple/st3_l3simple_primary.ini:
----
[londiste3]
job_name = st3_l3simple_primary
db = dbname=l3simple
queue_name = replika
logfile = st3simple/log/st3_l3simple_primary.log
pidfile = st3simple/pid/st3_l3simple_primary.pid
----
REMINDER: allow connections from the leaf host in pg_hba.conf, or create a
user to authenticate as and include it in the connection string.
Create Londiste root node:
----
londiste3 st3simple/st3_l3simple_primary.ini create-root node1 "dbname=l3simple host=10.199.59.194"
----
Run worker daemon for root node:
----
londiste3 -d st3simple/st3_l3simple_primary.ini worker
----
Create configuration file st3simple/st3_l3simple_leaf.ini
for worker daemon on target node:
----
[londiste3]
job_name = st3_l3simple_leaf
db = dbname=l3simple
queue_name = replika
logfile = st3simple/log/st3_l3simple_leaf.log
pidfile = st3simple/pid/st3_l3simple_leaf.pid
----
Initialize node in target database:
----
londiste3 st3simple/st3_l3simple_leaf.ini create-leaf node2 "dbname=l3simple" --provider="dbname=l3simple host=10.199.59.194"
----
Launch worker daemon for target database:
----
londiste3 -d st3simple/st3_l3simple_leaf.ini worker
----
Create config file `st3simple/pgqd.ini` for PgQ ticker daemon:
----
[pgqd]
logfile = st3simple/log/pgqd.log
pidfile = st3simple/pid/pgqd.pid
----
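All three configuration files above use standard INI syntax, so they can be
parsed with Python's configparser. A quick sanity-check sketch (which keys
are required is an assumption inferred from the examples above, not taken
from the londiste3 documentation):

```python
import configparser

# Minimal sanity check for the londiste3/pgqd ini files shown above:
# verify the expected section exists and contains the required keys.
def check_ini(text, section, required):
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return section in cp and required <= set(cp[section])

leaf_ini = """
[londiste3]
job_name = st3_l3simple_leaf
db = dbname=l3simple
queue_name = replika
logfile = st3simple/log/st3_l3simple_leaf.log
pidfile = st3simple/pid/st3_l3simple_leaf.pid
"""

print(check_ini(leaf_ini, "londiste3", {"job_name", "db", "queue_name"}))  # True
```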
Launch ticker daemon:
----
pgqd -d st3simple/pgqd.ini
----
To generate some data traffic on the master database while replicating,
run the following command in the background:
----
pgbench -T 120 -c 5 l3simple -f /tmp/throttled.pgbench
----

The file /tmp/throttled.pgbench contains the standard pgbench workload,
except that there are random-length waits between commands.
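The idea behind the throttled workload (the actual /tmp/throttled.pgbench
file is not shown in this howto) can be sketched as a loop that runs
transactions with random-length pauses in between, so the load trickles in
over the whole run instead of arriving in one burst:

```python
import random
import time

# Conceptual stand-in for the throttled workload: run transactions for a
# fixed duration, sleeping a random amount of time between each one.
def throttled_run(run_txn, duration_s=1.0, max_pause_s=0.01):
    count = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        run_txn()          # one unit of work (a transaction)
        count += 1
        time.sleep(random.uniform(0, max_pause_s))  # random-length wait
    return count

n = throttled_run(lambda: None, duration_s=0.1, max_pause_s=0.001)
print(n > 0)  # True
```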
Now add all the tables to replication, first on the root node and then on the leaf:
----
londiste3 st3simple/st3_l3simple_primary.ini add-table --all
londiste3 st3simple/st3_l3simple_leaf.ini add-table --all
----
== Checking and testing ==

To test our newly set up replication, run pgbench at full speed with 5
parallel database connections, generating database traffic for 10 seconds:
----
pgbench -T 10 -c 5 l3simple
----
After this is done, you can check that the tables on both sides have the same data:
----
londiste3 st3simple/st3_l3simple_leaf.ini compare
----
The compare command establishes the same logical point in time on the
provider and subscriber nodes, then counts and checksums the rows on both sides.
The result will look like this:
----
2011-12-25 08:24:42,138 29189 INFO Locking public.pgbench_accounts
2011-12-25 08:24:42,147 29189 INFO Syncing public.pgbench_accounts
2011-12-25 08:24:45,233 29189 INFO Counting public.pgbench_accounts
2011-12-25 08:24:46,154 29189 INFO srcdb: 200000 rows, checksum=3864719477
2011-12-25 08:24:46,953 29189 INFO dstdb: 200000 rows, checksum=3864719477
2011-12-25 08:24:46,961 29189 INFO Locking public.pgbench_branches
2011-12-25 08:24:46,965 29189 INFO Syncing public.pgbench_branches
2011-12-25 08:24:50,528 29189 INFO Counting public.pgbench_branches
2011-12-25 08:24:50,549 29189 INFO srcdb: 2 rows, checksum=-82377983
2011-12-25 08:24:50,556 29189 INFO dstdb: 2 rows, checksum=-82377983
2011-12-25 08:24:50,568 29189 INFO Locking public.pgbench_history
2011-12-25 08:24:50,572 29189 INFO Syncing public.pgbench_history
2011-12-25 08:24:53,641 29189 INFO Counting public.pgbench_history
2011-12-25 08:24:53,660 29189 INFO srcdb: 1310 rows, checksum=-34927328558
2011-12-25 08:24:53,670 29189 INFO dstdb: 1310 rows, checksum=-34927328558
2011-12-25 08:24:53,675 29189 INFO Locking public.pgbench_tellers
2011-12-25 08:24:53,677 29189 INFO Syncing public.pgbench_tellers
2011-12-25 08:24:56,733 29189 INFO Counting public.pgbench_tellers
2011-12-25 08:24:56,737 29189 INFO srcdb: 20 rows, checksum=518235101
2011-12-25 08:24:56,740 29189 INFO dstdb: 20 rows, checksum=518235101
----
The "checksum" is computed by adding up hashtext() values of all database rows.
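A useful property of summing per-row hashes is that the result does not
depend on the physical order of rows, so provider and subscriber tables can
be compared even when their storage order differs. A conceptual sketch
(zlib.crc32 stands in for Postgres' internal hashtext(); only the
sum-of-row-hashes idea matters here, not the particular hash function):

```python
import zlib

# Sum of per-row hashes: order-independent table checksum (conceptual).
def table_checksum(rows):
    return sum(zlib.crc32(repr(row).encode()) for row in rows)

src = [(1, "alice"), (2, "bob"), (3, "carol")]
dst = [(3, "carol"), (1, "alice"), (2, "bob")]  # same rows, different order

print(table_checksum(src) == table_checksum(dst))  # True
```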
== Done ==

The setup of a simple two-node cluster is done.