---
layout: post
title: CouchDB Load Balancing and Replication using HAProxy
published: true
_edit_last: "1"
categories:
- Dev
- Erlang
- Linux
type: post
status: publish
---
Last night, I decided to dig into <a href="">CouchDB</a> a bit more than I have in the past and set up a simple load-balanced and replicated configuration using <a href="">HAProxy</a>. In the end it was a pretty easy feat and seems to work fairly well. Here's what I had to do.
First, I set up three instances of CouchDB on the same machine, using a different configuration file, PID file, and loopback address for each. This could just as easily be three separate machines. If you run them on the same machine, make sure you adjust DbRootDir, BindAddress, and LogFile in each configuration file, and start each instance with a command like the following so the non-default configuration and PID location are used.
./couchdb -c SOME_PATH/couchdb2.ini -p SOME_PATH/
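For reference, the per-instance settings in couchdb2.ini might look something like this; the section name, paths, and loopback address here are illustrative, the point is simply that each instance gets its own data directory, log file, and bind address:

```ini
[Couch]
; Per-instance data directory so the databases don't collide
DbRootDir=SOME_PATH/couchdb2/data
; Separate loopback address for this instance; couchdb1 and couchdb3 get their own
BindAddress=127.0.0.2
Port=5984
; Per-instance log file
LogFile=SOME_PATH/couchdb2/couch.log
```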
As you may already know, CouchDB has a nice web interface called Futon at http://HOSTNAME:5984/_utils/. Using Futon, I created a database with the same name on all three instances. I then chose which instance would be my "master" (couchdb1); couchdb2 and couchdb3 would be "slaves". I put master and slave in quotes because, as far as I can tell, CouchDB has no such relationship built in. Any instance can replicate to any other as long as they can connect, so master-slave replication is simply the configuration I am enforcing with HAProxy and my replication POST commands; more on these bits later. I then created a document on my master node and used Futon's replicator to replicate the change to the other nodes. Next, I wanted a way to automate or schedule this. You can <a href="">initiate replication simply by sending a POST request</a> to CouchDB, so I wrote a simple curl script to do just that.
First, I created the replication POST body in a file:
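The body is a small JSON document naming the source and target databases. A minimal sketch of creating the couchdb1_2_rep file; the database name `mydb` is an assumption here, so substitute your own:

```shell
# Write the replication request body for couchdb1 -> couchdb2.
# "mydb" is an assumed database name; use the one you created in Futon.
cat > couchdb1_2_rep <<'EOF'
{"source": "mydb", "target": "http://couchdb2:5984/mydb"}
EOF
```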
When run against the master, this will replicate the master to couchdb2. I wrote a similar file for couchdb3 as well.
Then using curl I can send this body to the master:
curl -X POST --data @couchdb1_2_rep http://couchdb1:5984/_replicate
curl -X POST --data @couchdb1_3_rep http://couchdb1:5984/_replicate
After running, you should see output that starts with <em>{"ok":true,"session_id" ...}</em>, which means things went well. You should also see some output in the logs of both instances. These commands can be put in a cron job to run at specific intervals to keep the slaves updated. You can also create a script and configure <a href="">DbUpdateNotificationProcess</a> to replicate after each update. The latter is probably the nicer solution, but cron and curl should get you started.
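To schedule the commands above, they can be dropped straight into a crontab. A sketch that replicates every five minutes; the interval and file paths are illustrative:

```cron
# m h dom mon dow  command
*/5 * * * * curl -s -X POST --data @/SOME_PATH/couchdb1_2_rep http://couchdb1:5984/_replicate
*/5 * * * * curl -s -X POST --data @/SOME_PATH/couchdb1_3_rep http://couchdb1:5984/_replicate
```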
I then moved on to setting up HAProxy to load balance between the nodes. Since I wanted a master-slave relationship between the nodes, I needed HAProxy to send only POSTs, PUTs, and DELETEs to the master and GET requests to the two slaves. After checking the <a href="">docs</a> and playing with a couple of different ACL configurations, I didn't find a solution. I then asked the mailing list for advice, and conveniently a solution came back quickly. They also pointed me to another piece of <a href="">documentation</a> I hadn't found initially. My HAProxy configuration is pretty basic, but it shows what needs to be done.
global
    maxconn 4096
    nbproc 2

defaults
    mode http
    clitimeout 150000
    srvtimeout 30000
    contimeout 4000
    balance roundrobin
    stats enable
    stats uri /haproxy?stats

frontend couchdb_lb
    bind localhost:8080
    acl master_methods method POST DELETE PUT
    use_backend master_backend if master_methods
    default_backend slave_backend

backend master_backend
    server couchdb1 couchdb1:5984 weight 1 maxconn 512 check

backend slave_backend
    server couchdb2 couchdb2:5984 weight 1 maxconn 512 check
    server couchdb3 couchdb3:5984 weight 1 maxconn 512 check
The part that enforces where the PUTs, DELETEs, and POSTs go is the ACL definition: if HAProxy receives a POST, DELETE, or PUT, it uses the master node; otherwise it uses a slave.
Once done, I started HAProxy and tested it out, and found that it worked nicely: GETs went to the slaves in round-robin fashion, and PUTs, DELETEs, and POSTs went to the master. I then made a slight change to my earlier curl commands to send the replication POSTs through HAProxy, just to make sure.
curl -X POST --data @couchdb1_2_rep http://localhost:8080/_replicate
curl -X POST --data @couchdb1_3_rep http://localhost:8080/_replicate
If things are working properly, you should find that the replication POST commands go only to the master node and the GET commands go to the two slaves.
CouchDB is pretty easy to get going and fun to work with. Hopefully this helps you get started.