Improve snap creation (#38)
* Added --edge to installation
* Moved couchdb.ini from local.d to default.d
* Switched config order to the standard: default.ini, default.d, local.ini, local.d
* Rewrote the configuration section to reflect the standard order
* Added a 90-override.ini file to ensure HTTP changes go in the last file
* Pared back the list of settable options to the bare minimum
* Emphasized the q=1 parameter
* On fresh installation, copy local.ini from the rel directory
* Added a sequence number to couchdb.ini
* snap set now only configures vm.args; updated HOWTO to use HTTP configuration
sklassen authored and wohali committed Nov 30, 2018
1 parent eeba118 commit 04a78741715a0ebbfa4a967e7a82bf599b86de33
Showing 7 changed files with 176 additions and 277 deletions.
BUILD.md
# Building snaps

## Prerequisites

Building the CouchDB snap requires Ubuntu 16.04. If you are building on 18.04, LXD can provide a 16.04 container.

1. `lxc launch ubuntu:16.04 couchdb-pkg`
1. `lxc exec couchdb-pkg bash`
1. `sudo apt update`
1. `sudo apt install snapd snapcraft`

1. `git clone https://github.com/couchdb/couchdb-pkg.git`
1. `cd couchdb-pkg`

## How to do it

1. Edit `snap/snapcraft.yaml` to point to the correct tag (e.g. `2.2.0`)
1. `snapcraft`

## Installation

You may need to pull the built snap file from the LXD container to the host system.

```
$ lxc file pull couchdb-pkg/root/couchdb-pkg/couchdb_2.2.0_amd64.snap /tmp/couchdb_2.2.0_amd64.snap
```

The self-crafted snap will need to be installed in devmode:

```
$ sudo snap install /tmp/couchdb_2.2.0_amd64.snap --devmode
```
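If the install succeeded, the package should now appear in the snap listing (`snap list` is standard snapd tooling):
```
$ snap list couchdb
```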


HOWTO.md
# HOW TO install a cluster using snap

## Create three nodes

In the example below, we are going to set up a three node CouchDB cluster. (Three is the minimum number needed to support clustering features.) We'll also set up a separate, single machine for making backups. In this example we will be using LXD.

We launch a (single) new container, install CouchDB via snap from the store, enable the interfaces, open up the bind address, and set an admin password:
```bash
localhost> lxc launch ubuntu:18.04 couchdb-c1
localhost> lxc exec couchdb-c1 bash
couchdb-c1> apt update
couchdb-c1> snap install couchdb --edge
couchdb-c1> snap connect couchdb:mount-observe
couchdb-c1> snap connect couchdb:process-control
couchdb-c1> curl -X PUT http://localhost:5984/_node/_local/_config/httpd/bind_address -d '"0.0.0.0"'
couchdb-c1> curl -X PUT http://localhost:5984/_node/_local/_config/admins/admin -d '"Be1stDB"'
couchdb-c1> exit
```
Back on localhost, we can then use the LXD copy function to speed up installation:
```bash
$ lxc copy couchdb-c1 couchdb-c2
$ lxc copy couchdb-c1 couchdb-c3
$ lxc copy couchdb-c1 couchdb-bkup
$ lxc start couchdb-c2
$ lxc start couchdb-c3
$ lxc start couchdb-bkup
```

## Configure CouchDB using the snap tool

We are going to need the IP addresses:
```bash
$ lxc list
```
Now, again from localhost, and using the `lxc exec` command, we will use the snap configuration tool to set the various configuration files.
```bash
$ lxc exec couchdb-c1 snap set couchdb name=couchdb@10.210.199.73 setcookie=monster
$ lxc exec couchdb-c2 snap set couchdb name=couchdb@10.210.199.221 setcookie=monster
$ lxc exec couchdb-c3 snap set couchdb name=couchdb@10.210.199.121 setcookie=monster
```
The backup machine we will configure as a single, unsharded instance (n=1, q=1).
```bash
$ lxc exec couchdb-bkup snap set couchdb name=couchdb@127.0.0.1 setcookie=monster
$ lxc exec couchdb-bkup -- curl -X PUT http://admin:Be1stDB@localhost:5984/_node/_local/_config/cluster/n -d '"1"'
$ lxc exec couchdb-bkup -- curl -X PUT http://admin:Be1stDB@localhost:5984/_node/_local/_config/cluster/q -d '"1"'
```
Each snap must be restarted for the new configuration to take effect.
```bash
$ lxc exec couchdb-c1 snap restart couchdb
$ lxc exec couchdb-c2 snap restart couchdb
$ lxc exec couchdb-c3 snap restart couchdb
$ lxc exec couchdb-bkup snap restart couchdb
```
The configuration files are stored here:
```bash
$ lxc exec couchdb-bkup cat /var/snap/couchdb/current/etc/vm.args
```
Any changes made to CouchDB via the HTTP configuration tool are stored here:
```bash
$ lxc exec couchdb-bkup cat /var/snap/couchdb/current/etc/local.ini
```
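For orientation, the two files look roughly like this (illustrative values only; the exact entries depend on the `snap set` and HTTP calls above, and the admin password is stored hashed):
```
# vm.args (managed by snap set)
-name couchdb@127.0.0.1
-setcookie monster

# local.ini (managed by the HTTP configuration API)
[httpd]
bind_address = 0.0.0.0

[admins]
admin = -pbkdf2-…

[cluster]
n = 1
q = 1
```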

## Configure CouchDB Cluster (using the http interface)

Now we set up the cluster via the http front-end. This only needs to be run once on the first machine. The last command
syncs with the other nodes and creates the standard databases.
```bash
$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.221", "port": "5984", "username": "admin", "password":"Be1stDB"}'
$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "add_node", "host":"10.210.199.121", "port": "5984", "username": "admin", "password":"Be1stDB"}'
$ curl -X POST -H "Content-Type: application/json" http://admin:Be1stDB@10.210.199.73:5984/_cluster_setup -d '{"action": "finish_cluster"}'
```
Now we have a functioning three node cluster.
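As a quick check, any node can be asked for its view of the cluster via the standard `_membership` endpoint; all three node names should be listed:
```bash
$ curl -X GET http://admin:Be1stDB@10.210.199.73:5984/_membership
```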

## An Example Database

Let's create an example database ...
```bash
$ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example
$ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aaa -d '{"test":1}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aab -d '{"test":2}' -H "Content-Type: application/json"
$ curl -X PUT http://admin:Be1stDB@10.210.199.73:5984/example/aac -d '{"test":3}' -H "Content-Type: application/json"
```
... And see that it is created on all three nodes.
```bash
$ curl -X GET http://admin:Be1stDB@10.210.199.73:5984/example/_all_docs
$ curl -X GET http://admin:Be1stDB@10.210.199.221:5984/example/_all_docs
$ curl -X GET http://admin:Be1stDB@10.210.199.121:5984/example/_all_docs
```
## Backing Up CouchDB

Our backup server is on 10.210.199.242. We will manually replicate to it from any one of the nodes.
```bash
$ curl -X POST http://admin:Be1stDB@10.210.199.242:5984/_replicate -d '{"source":"http://10.210.199.73:5984/example", "target":"example", "continuous":false,"create_target":true}' -H "Content-Type: application/json"
$ curl -X GET http://admin:Be1stDB@10.210.199.242:5984/example/_all_docs
```
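For recurring backups, one option is a persistent replication document in the `_replicator` database (one of the standard databases created by `finish_cluster`). A sketch, with the document id `backup-example` chosen here purely for illustration:
```bash
$ curl -X PUT http://admin:Be1stDB@10.210.199.242:5984/_replicator/backup-example \
  -H "Content-Type: application/json" \
  -d '{"source":"http://admin:Be1stDB@10.210.199.73:5984/example", "target":"http://admin:Be1stDB@127.0.0.1:5984/example", "continuous":true, "create_target":true}'
```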
The data store for the cluster's nodes is sharded:
```bash
$ lxc exec couchdb-c1 ls /var/snap/couchdb/common/data/shards/
```

The backup database, by contrast, is a single directory (with q=1 there is just one shard range, 00000000-ffffffff):
```bash
$ lxc exec couchdb-bkup ls /var/snap/couchdb/common/data/shards/
```

## Monitoring CouchDB

The logs, by default, are captured by journald. First connect to the node in question:
`$ lxc exec couchdb-c1 bash`
Then show the logs as usual. The CouchDB unit name is prefixed with `snap.`, and the exact suffix may vary with the version of the snap:
`$ journalctl -u snap.couchdb* -f`
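The same logs can be followed from localhost without opening a shell, by composing the two commands above with `lxc exec`:
```bash
$ lxc exec couchdb-c1 -- journalctl -u "snap.couchdb*" -f
```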
README.md
# Snap Installation

## Downloading from the snap store

The snap can be installed from a file or directly from the snap store. It is, for the moment, listed in the edge channel.

```
$ sudo snap install couchdb --edge
```
## Enable snap permissions

The snap installation uses AppArmor to protect your system. CouchDB requests access to two interfaces: mount-observe, which
is used by the disk compactor to know when to initiate a cleanup; and process-control, which is used by the indexer to set
the priority of couchjs to 'nice'. These two interfaces, while not required, are useful. If they are not enabled, CouchDB will
still run, but you will need to run the compactor manually and couchjs may put a heavy load on the system when indexing.

To connect the interfaces type:
```
$ sudo snap connect couchdb:mount-observe
$ sudo snap connect couchdb:process-control
```
## Snap configuration

There are two layers in the CouchDB configuration hierarchy.

The default layer is stored in /snap/couchdb/current/rel/couchdb/etc/: default.ini is consulted first, followed by any files in the default.d directory. In the snap installation this layer is mounted read-only.

The local layer is stored in /var/snap/couchdb/current/etc/ on the writable /var mount. Within this second layer, configuration is set in local.ini or superseded by any file within local.d. Configuration management tools (like puppet, chef, ansible, salt) operate here.
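To see the two layers side by side (paths as above; the first is read-only, the second writable):
```
$ ls /snap/couchdb/current/rel/couchdb/etc/
$ ls /var/snap/couchdb/current/etc/
```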

The name of the erlang process and the security cookie it uses are set in the vm.args file. These can be set using the snap native configuration. For example, when setting up a cluster over several machines, the convention is to set the erlang name to couchdb@your.ip.address.

```
$ sudo snap set couchdb name=couchdb@216.3.128.12 setcookie=cutter
```

Snap native configuration changes only come into effect after a restart:

```
$ sudo snap restart couchdb
```

CouchDB options can be set via configuration over HTTP, as below.

```
$ curl -X PUT http://localhost:5984/_node/_local/_config/httpd/bind_address -d '"0.0.0.0"'
$ curl -X PUT http://localhost:5984/_node/_local/_config/couchdb/delayed-commits -d '"true"'
```

Changes here do not require a restart.
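A setting can be read back the same way with a GET:
```
$ curl -X GET http://localhost:5984/_node/_local/_config/httpd/bind_address
```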

For anything else in vm.args, or for configuration not whitelisted over HTTP, you can edit the files in /var/snap/couchdb/current/etc by hand and restart CouchDB.
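For example (any editor works; the restart picks up the change):
```
$ sudo vi /var/snap/couchdb/current/etc/vm.args
$ sudo snap restart couchdb
```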

## Example Cluster

See the [HOWTO][1] file for an example of a three node cluster and further notes.

## Building a Private Snap

If you want to build your own snap file from source, see the [BUILD][2] file for instructions.

[1]: HOWTO.md
[2]: BUILD.md
