[snap] smaller snap, improve cluster examples (#64)
* Added configure hook 'snap set couchdb admin=[password]'
* Added port to the list of snap configured parameters
* Split packages into build and stage to reduce snap size
* Some formatting cleanups
* Rewrote cluster HOWTO with new snap functionality without using LXC

Co-authored-by: Joan Touzet <wohali@apache.org>
Co-authored-by: Simon Klassen <>
sklassen and wohali committed Mar 26, 2020
1 parent 43199b3 commit 79408d595e74470e7ec3caaee412449fc16b4ea0
Showing 3 changed files with 121 additions and 110 deletions.
@@ -109,146 +109,148 @@ You can set up a snap-based cluster on your desktop in no time using the couchdb

In the example below, we are going to set up a three node CouchDB cluster. (Three is the
minimum number needed to support clustering features.) We'll also set up a separate,
single machine for making backups. In this example we will be using parallel instances
of snap, which are available from snapd version 2.36 onwards.

First we need to enable parallel instances of snap.
```bash
$ snap set system experimental.parallel-instances=true
```
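Parallel instances need snapd 2.36 or newer, so it can be worth confirming the installed version first. A minimal sketch, assuming the usual `snap version` output format (the `supports_parallel` helper is ours, not part of snap):

```shell
# Hypothetical helper: true if the given snapd version is at least 2.36.
supports_parallel() {
    required="2.36"
    lowest=$(printf '%s\n%s\n' "$required" "$1" | sort -V | head -n1)
    [ "$lowest" = "$required" ]
}

# Example: check the running snapd (field extraction assumes `snap version` output):
# supports_parallel "$(snap version | awk '/^snapd/ {print $2}')" || echo "upgrade snapd first"
```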

We install couchdb via snap from the store, enable the interfaces, open up the bind
address, and set an admin password.
```bash
$> snap install couchdb_1
$> snap connect couchdb_1:mount-observe
$> snap connect couchdb_1:process-control
$> snap set couchdb_1 name=couchdb1@127.0.0.1 setcookie=cutter port=5981 admin=Be1stDB
```
You will need to edit the local configuration file to set the data directories manually.
You can find local.ini at `/var/snap/couchdb_1/current/etc/local.ini`. Ensure that
the `[couchdb]` stanza looks like this:
```ini
[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000
database_dir = /var/snap/couchdb_1/common/data
view_index_dir = /var/snap/couchdb_1/common/data
```
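Editing each instance's local.ini by hand is error-prone, so the change can be scripted. This is a sketch of our own (the `set_data_dirs` helper is not part of the snap); it assumes the stock `database_dir`/`view_index_dir` lines are present, commented out or not, and that GNU sed is available:

```shell
# Hypothetical helper: point an instance's database and view directories
# at its own common/data area, editing the ini file in place.
set_data_dirs() {
    ini="$1"      # e.g. /var/snap/couchdb_1/current/etc/local.ini
    instance="$2" # e.g. couchdb_1
    sed -i \
        -e "s|^;\{0,1\}database_dir *=.*|database_dir = /var/snap/${instance}/common/data|" \
        -e "s|^;\{0,1\}view_index_dir *=.*|view_index_dir = /var/snap/${instance}/common/data|" \
        "$ini"
}

# set_data_dirs /var/snap/couchdb_1/current/etc/local.ini couchdb_1
```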
Start your engine ... and confirm that couchdb is running.
```bash
$> snap start couchdb_1
$> curl -X GET http://localhost:5981
```
Then repeat for couchdb_2 and couchdb_3, editing each local.ini and changing the
name and port number. They should all have the same admin password and cookie.
```bash
$> snap install couchdb_2
$> snap connect couchdb_2:mount-observe
$> snap connect couchdb_2:process-control
$> snap set couchdb_2 name=couchdb2@127.0.0.1 setcookie=cutter port=5982 admin=Be1stDB
$> snap install couchdb_3
$> snap connect couchdb_3:mount-observe
$> snap connect couchdb_3:process-control
$> snap set couchdb_3 name=couchdb3@127.0.0.1 setcookie=cutter port=5983 admin=Be1stDB
```
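The three instances only differ by their index, so the commands above follow a pattern. A sketch that just prints the commands rather than running them (the `print_setup` helper is ours; it assumes the ports 5981-5983 and credentials used above):

```shell
# Hypothetical helper: print the setup commands for couchdb_1..couchdb_3.
print_setup() {
    for i in 1 2 3; do
        printf 'snap install couchdb_%s\n' "$i"
        printf 'snap connect couchdb_%s:mount-observe\n' "$i"
        printf 'snap connect couchdb_%s:process-control\n' "$i"
        printf 'snap set couchdb_%s name=couchdb%s@127.0.0.1 setcookie=cutter port=598%s admin=Be1stDB\n' "$i" "$i" "$i"
    done
}

# print_setup | sh    # review the output first, then pipe to sh to execute
```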

## Enable CouchDB Cluster (using the http interface)

Have the first node generate two uuids
```bash
$> curl http://localhost:5981/_uuids?count=2
```

Each instance within the cluster needs to share the same uuid ...

```bash
curl -X PUT http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
curl -X PUT http://admin:Be1stDB@127.0.0.1:5982/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
curl -X PUT http://admin:Be1stDB@127.0.0.1:5983/_node/_local/_config/couchdb/uuid -d '"f6f22e2c664b49ba2c6dc88379002548"'
```

... and a common secret, different from the uuid but likewise shared by every node ...

```bash
curl -X PUT http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'
curl -X PUT http://admin:Be1stDB@127.0.0.1:5982/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'
curl -X PUT http://admin:Be1stDB@127.0.0.1:5983/_node/_local/_config/couch_httpd_auth/secret -d '"f6f22e2c664b49ba2c6dc88379002a80"'
```

... after which they can be enabled for clustering
```bash
curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5982/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5983/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
```
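Since the `enable_cluster` call is identical on every port, it can be wrapped in a loop. A sketch (the `enable_cluster_all` function name is ours; ports and credentials as above):

```shell
# Hypothetical helper: POST enable_cluster to each node in turn.
enable_cluster_all() {
    for port in 5981 5982 5983; do
        curl -s -X POST -H "Content-Type: application/json" \
            "http://admin:Be1stDB@127.0.0.1:${port}/_cluster_setup" \
            -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"Be1stDB", "node_count":"3"}'
    done
}

# enable_cluster_all
```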

You can check the status here.
```bash
curl http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup
curl http://admin:Be1stDB@127.0.0.1:5982/_cluster_setup
curl http://admin:Be1stDB@127.0.0.1:5983/_cluster_setup
```

Next we want to join the three nodes together. We do this through requests to the first node.
```bash
curl -X PUT "http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_nodes/couchdb2@127.0.0.1" -d '{"port":5982}'
curl -X PUT "http://admin:Be1stDB@127.0.0.1:5981/_node/_local/_nodes/couchdb3@127.0.0.1" -d '{"port":5983}'
```
Now we set up the cluster via the http front-end. This only needs to be run once, on the
first node. The last command syncs with the other nodes and creates the standard
databases.
```bash
curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup -d '{"action": "finish_cluster"}'
curl http://admin:Be1stDB@127.0.0.1:5981/_cluster_setup
```
If everything has been successful, then the three nodes can be seen here:
```bash
$> curl -X GET "http://admin:Be1stDB@127.0.0.1:5981/_membership"
```
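The `_membership` response lists the nodes under `all_nodes` and `cluster_nodes`; for a quick sanity check without installing jq, a grep one-liner can count them. A sketch (the `count_nodes` helper is ours; it assumes node names of the form `couchdbN@host` as configured above):

```shell
# Hypothetical helper: count distinct node names in JSON on stdin.
count_nodes() {
    grep -o 'couchdb[0-9]*@[^"]*' | sort -u | wc -l
}

# curl -s http://admin:Be1stDB@127.0.0.1:5981/_membership | count_nodes
```

With the three-node cluster above, the count should come back as 3.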

Now we have a functioning three node cluster. Next we will test it.

## An Example Database

Let's create an example database ...

```bash
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aab -d '{"test":2}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@localhost:5981/example/aac -d '{"test":3}' -H "Content-Type: application/json"
```

... and verify that it is created on all three nodes ...
```bash
$ curl -X GET http://localhost:5981/example/_all_docs
$ curl -X GET http://localhost:5982/example/_all_docs
$ curl -X GET http://localhost:5983/example/_all_docs
```
... and is separated into shards on the disk.
```bash
$ ls /var/snap/couchdb_?/common/data/shards/
```

## Backing Up CouchDB

We will configure the backup machine as a single instance (`n=1, q=1`).
```bash
$> snap install couchdb_bkup
$> snap set couchdb_bkup name=couchdb0@localhost setcookie=cutter port=5980 admin=Be1stDB
$> curl -X PUT http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/n -d '"1"'
$> curl -X PUT http://admin:Be1stDB@localhost:5980/_node/_local/_config/cluster/q -d '"1"'
```

We will manually replicate to it from one of the nodes (any one will do).
```bash
$ curl -X POST http://admin:Be1stDB@localhost:5980/_replicate \
-d '{"source":"http://localhost:5981/example","target":"example","continuous":false,"create_target":true}' \
-H "Content-Type: application/json"
$ curl -X GET http://admin:Be1stDB@localhost:5980/example/_all_docs
```
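If you back up more than one database, the replication body only varies by database name, so it can be generated. A sketch (the `replicate_body` helper is ours; source node and credentials as above):

```shell
# Hypothetical helper: emit a one-shot _replicate body for the named database.
replicate_body() {
    db="$1"
    printf '{"source":"http://localhost:5981/%s","target":"%s","continuous":false,"create_target":true}' "$db" "$db"
}

# for db in example; do
#     curl -s -X POST "http://admin:Be1stDB@localhost:5980/_replicate" \
#         -d "$(replicate_body "$db")" -H "Content-Type: application/json"
# done
```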

The backup database has a single shard and single directory:
```bash
$ ls /var/snap/couchdb_bkup/common/data/shards/
```

-----

# Remote Shell into CouchDB

In the very rare case that you need to connect to the couchdb server, a remsh script is
provided. You need to specify both the name of the server and the cookie, even if you
are using the default.
```bash
/snap/bin/couchdb.remsh -n couchdb@localhost -c monster
```
# Building this snap <a name="building"></a>

This build requires Ubuntu 18.04, the `core18` core, and the `snapcraft` tool. The
@@ -30,7 +30,7 @@ _modify_vm_args() {
fi
}

## add or replace for the local.ini file
_modify_ini_args() {
opt=$1
value="$2"
@@ -57,7 +57,7 @@ do
fi
done

LOCAL_INI_OPTIONS="admin port"
for key in $LOCAL_INI_OPTIONS
do
val=$(snapctl get $key)
@@ -5,7 +5,7 @@ description: |
CouchDB is a database that completely embraces the web. Store your data with
JSON documents. Access your documents and query your indexes with your web
browser, via HTTP. Index, combine, and transform your documents with
JavaScript.
architectures:
- build-on: amd64
@@ -21,16 +21,20 @@ parts:
override-pull: |
apt-get update
apt-get upgrade -yy
apt-get install -y --no-install-recommends apt-transport-https \
gnupg ca-certificates
echo "deb https://apache.bintray.com/couchdb-deb bionic main" | \
tee /etc/apt/sources.list.d/custom.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys \
8756C4F765C9AC3CB6B85D62379CE192D401AB61
apt-get update
couchdb:
after: [add-repo]
plugin: dump
source: https://apache.bintray.com/couchdb-deb/pool/C/CouchDB/couchdb_3.0.0~bionic_amd64.deb
source-type: deb
# because this doesn't use apt, we have to manually list all of our
# dependencies :(
# the following are all in core18, and warning output can safely be ignored:
# lib/x86_64-linux-gnu/libbz2.so.1.0
# lib/x86_64-linux-gnu/libc.so.6
@@ -53,17 +57,18 @@ parts:
# usr/lib/x86_64-linux-gnu/liblz4.so.1
# usr/lib/x86_64-linux-gnu/libpanelw.so.5
# usr/lib/x86_64-linux-gnu/libstdc++.so.6
stage-packages:
- ca-certificates
build-packages:
- adduser
- curl
- debconf
- ca-certificates
- init-system-helpers
- couch-libmozjs185-1.0
- lsb-base
- curl
- libgcc1
stage-packages:
- couch-libmozjs185-1.0
- procps
- libcurl4
- libgcc1
- libicu60
- libssl1.0.0
- libtinfo5
@@ -79,6 +84,8 @@ parts:
layout:
# Database and log files are common across upgrades
# We do not bind default.ini or default.d/ as these are
# intended to be immutable
$SNAP/opt/couchdb/data:
bind: $SNAP_COMMON/data
$SNAP/opt/couchdb/var/log:
@@ -90,11 +97,13 @@ layout:
bind: $SNAP_DATA/etc/local.d
$SNAP/opt/couchdb/etc/local.ini:
bind-file: $SNAP_DATA/etc/local.ini

environment:
COUCHDB_ARGS_FILE: ${SNAP_DATA}/etc/vm.args
ERL_FLAGS: "-couch_ini ${SNAP}/opt/couchdb/etc/default.ini
${SNAP}/opt/couchdb/etc/default.d
${SNAP_DATA}/etc/local.ini
${SNAP_DATA}/etc/local.d"

apps:
couchdb:
