Add e2e testing
Adds integration testing using Kind (Kubernetes in Docker). The tests
stand up a local Kubernetes cluster and install the current chart
using Helm 2.x.
willholley committed Oct 17, 2019
1 parent d8c73ba commit e366671b0e7825e87761ee8dce9f37aaaedf76fa
Showing 6 changed files with 245 additions and 152 deletions.
@@ -12,16 +12,24 @@


-.PHONY: test
+.PHONY: lint
 	@helm lint couchdb

-package: test
+.PHONY: package
+package: lint
 	@helm package couchdb

-.PHONY: package
-publish: test
+.PHONY: publish
-	@git checkout gh-pages
+	@git checkout -b gh-pages-update
 	@helm repo index docs --url
 	@git add -i
-	@echo "To complete the publish step, commit and push the chart tgz and updated index to gh-pages"
+	@git commit
+	@echo "To complete the publish step, push the branch to your GitHub remote and create a PR against gh-pages"

+# Run end to end tests using KinD
+.PHONY: test
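The recipe for the new `test` target is truncated in this view. Judging by the harness script added later in this commit, it plausibly just shells out to the Kind test script; a minimal sketch, assuming the script path `test/e2e-kind.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical equivalent of the `test` target: fail fast if Docker
# (required by Kind) is missing, then delegate to the harness script.
run_make_test() {
    command -v docker >/dev/null 2>&1 || { echo 'docker is required' >&2; return 1; }
    ./test/e2e-kind.sh   # assumed path; the harness script appears later in this commit
}
```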
@@ -1,167 +1,43 @@
-# CouchDB
+# CouchDB Helm Charts

-Apache CouchDB is a database featuring seamless multi-master sync, that scales
-from big data to mobile, with an intuitive HTTP/JSON API and designed for
-reliability.
+This repository contains assets related to the CouchDB Helm chart.

-This chart deploys a CouchDB cluster as a StatefulSet. It creates a ClusterIP
-Service in front of the Deployment for load balancing by default, but can also
-be configured to deploy other Service types or an Ingress Controller. The
-default persistence mechanism is simply the ephemeral local filesystem, but
-production deployments should set `persistentVolume.enabled` to `true` to attach
-storage volumes to each Pod in the Deployment.
+## Layout

-## TL;DR
+* `couchdb`: contains the unbundled Helm chart
+* `test`: contains scripts to test the chart locally using [Kind][5]

-    $ helm repo add couchdb
-    $ helm install couchdb/couchdb --set allowAdminParty=true
+## Testing

-## Prerequisites
+`make test` will run an integration test using [Kind][5]. This stands up a
+Kubernetes cluster locally and ensures the chart will deploy using the default
+options and Helm.
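At a high level the harness drives the lifecycle below. This is a sketch with a hypothetical helper name, not the actual script: create a throwaway Kind cluster, install the chart with default options, and always tear the cluster down afterwards.

```shell
#!/usr/bin/env bash
# Lifecycle sketch only; the real harness additionally runs the chart-testing
# ("ct") container and a local-path provisioner.
set -o errexit

e2e_lifecycle() {
    trap 'kind delete cluster --name chart-testing' EXIT
    kind create cluster --name chart-testing --wait 60s
    helm install couchdb/couchdb    # default options, as the README describes
}
```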

-- Kubernetes 1.8+ with Beta APIs enabled
+## Releasing

-## Installing the Chart
+The Helm chart is published to a Helm repository hosted by GitHub Pages. This
+is maintained in the `gh-pages` branch of this repository.

-To install the chart with the release name `my-release`:
+To publish a new release, perform the following steps:

-Add the CouchDB Helm repository:
+1. Create a Helm bundle (*.tgz) for the current couchdb chart
+2. Switch to the `gh-pages` branch
+3. Run `helm repo index docs --url` to generate the Helm repository index
+4. `git add` the tgz bundle and the `index.yaml` files. Do not delete the old chart bundles!
+5. Commit the changes and create a PR to `gh-pages`.

-    $ helm repo add couchdb
+`make publish` automates these steps for you.
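The five steps can be sketched as a script; the Pages URL argument is a placeholder, and moving the bundle into `docs/` follows from step 3 indexing the `docs` directory:

```shell
#!/usr/bin/env bash
# Sketch of the manual release flow that `make publish` automates.
set -o errexit

release_chart() {
    local repo_url="$1"                        # your GitHub Pages URL (placeholder)
    helm package couchdb                       # 1. bundle the chart as couchdb-<version>.tgz
    git checkout gh-pages                      # 2. switch to the publishing branch
    mv couchdb-*.tgz docs/                     #    keep old bundles; only add the new one
    helm repo index docs --url "$repo_url"     # 3. regenerate docs/index.yaml
    git add docs                               # 4. stage the bundle and index.yaml
    git commit -m "Release new chart version"  # 5. commit, then open a PR to gh-pages
}
```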

-    $ helm install --name my-release couchdb/couchdb

-This will create a Secret containing the admin credentials for the cluster.
-Those credentials can be retrieved as follows:

-    $ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode

-If you prefer to configure the admin credentials directly you can create a
-Secret containing `adminUsername`, `adminPassword` and `cookieAuthSecret` keys:

-    $ kubectl create secret generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz

-and then install the chart while overriding the `createAdminSecret` setting:

-    $ helm install --name my-release --set createAdminSecret=false couchdb/couchdb
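The pipelines above lean on Secrets storing their values base64-encoded: `kubectl create secret` encodes the literals, and the `| base64 --decode` step reverses that on read. The round trip can be checked locally without a cluster:

```shell
# Encode a literal the way a Secret stores it, then decode it back
# (this is what the `| base64 --decode` step undoes above).
encoded=$(printf 'bar' | base64)
printf '%s' "$encoded" | base64 --decode    # prints: bar
```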

-This Helm chart deploys CouchDB on the Kubernetes cluster in a default
-configuration. The [configuration](#configuration) section lists
-the parameters that can be configured during installation.

-> **Tip**: List all releases using `helm list`

-## Uninstalling the Chart

-To uninstall/delete the `my-release` Deployment:

-    $ helm delete my-release

-The command removes all the Kubernetes components associated with the chart and
-deletes the release.

-## Upgrading an existing Release to a new major version

-A major chart version change (like v0.2.3 -> v1.0.0) indicates that there is an
-incompatible breaking change needing manual actions.

-## Migrating from stable/couchdb

-This chart replaces the `stable/couchdb` chart previously hosted by Helm and continues the
-version semantics. You can upgrade directly from `stable/couchdb` to this chart using:

-    $ helm repo add couchdb
-    $ helm upgrade my-release couchdb/couchdb

-## Configuration

-The following table lists the most commonly configured parameters of the
-CouchDB chart and their default values:

-| Parameter | Description | Default |
-|-----------|-------------|---------|
-| `clusterSize` | The initial number of nodes in the CouchDB cluster | 3 |
-| `couchdbConfig` | Map allowing override elements of server .ini config | chttpd.bind_address=any |
-| `allowAdminParty` | If enabled, start cluster without admin account | false (requires creating a Secret) |
-| `createAdminSecret` | If enabled, create an admin account and cookie secret | true |
-| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` |
-| `erlangFlags` | Map of flags supplied to the underlying Erlang VM | name: couchdb, setcookie: monster |
-| `persistentVolume.enabled` | Boolean determining whether to attach a PV to each node | false |
-| `persistentVolume.size` | If enabled, the size of the persistent volume to attach | 10Gi |
-| `enableSearch` | Adds a sidecar for Lucene-powered text search | false |

-A variety of other parameters are also configurable. See the comments in the
-`values.yaml` file for further details:

-| Parameter | Default |
-|-----------|---------|
-| `adminUsername` | admin |
-| `adminPassword` | auto-generated |
-| `cookieAuthSecret` | auto-generated |
-| `image.repository` | couchdb |
-| `image.tag` | 2.3.1 |
-| `image.pullPolicy` | IfNotPresent |
-| `searchImage.repository` | kocolosk/couchdb-search |
-| `searchImage.tag` | 0.1.0 |
-| `searchImage.pullPolicy` | IfNotPresent |
-| `initImage.repository` | busybox |
-| `initImage.tag` | latest |
-| `initImage.pullPolicy` | Always |
-| `ingress.enabled` | false |
-| `ingress.hosts` | chart-example.local |
-| `ingress.annotations` | |
-| `ingress.tls` | |
-| `persistentVolume.accessModes` | ReadWriteOnce |
-| `persistentVolume.storageClass` | Default for the Kube cluster |
-| `podManagementPolicy` | Parallel |
-| `affinity` | |
-| `resources` | |
-| `service.annotations` | |
-| `service.enabled` | true |
-| `service.type` | ClusterIP |
-| `service.externalPort` | 5984 |
-| `dns.clusterDomainSuffix` | cluster.local |
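Rather than repeating `--set` flags, any of the parameters above can be collected in a values file. A small illustrative fragment (the values are arbitrary; the `couchdbConfig` nesting mirrors the `chttpd.bind_address` default shown in the table):

```yaml
# my-values.yaml -- pass with: helm install --name my-release -f my-values.yaml couchdb/couchdb
clusterSize: 5
persistentVolume:
  enabled: true
  size: 20Gi
couchdbConfig:
  chttpd:
    bind_address: any
```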

-## Feedback, Issues, Contributing
+## Feedback / Issues / Contributing

 General feedback is welcome at our [user][1] or [developer][2] mailing lists.

 Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
 with issue reporting or contributing to the upkeep of this project. In short,
-use GitHub Issues, do not report anything on Docker's website.
+use GitHub Issues, do not report anything to the Helm team.

-## Non-Apache CouchDB Development Team Contributors

-- [@natarajaya](
-- [@satchpx](
-- [@spanato](
-- [@jpds](
-- [@sebastien-prudhomme](
-- [@stepanstipl](
-- [@amatas](
-- [@Chimney42](
-- [@mattjmcnaughton](
-- [@mainephd](
-- [@AdamDang](
-- [@mrtyler](

+The chart follows the technical guidelines / best practices [maintained][4] by the Helm team.

@@ -0,0 +1 @@
helm-extra-args: --timeout 800
@@ -0,0 +1,100 @@
#!/usr/bin/env bash

set -o errexit
set -o nounset
set -o pipefail

readonly CT_VERSION=v2.3.3
readonly KIND_VERSION=v0.5.1
readonly CLUSTER_NAME=chart-testing
readonly K8S_VERSION=v1.14.3

run_ct_container() {
    echo 'Running ct container...'
    docker run --rm --interactive --detach --network host --name ct \
        --volume "$(pwd)/test/ct.yaml:/etc/ct/ct.yaml" \
        --volume "$(pwd):/workdir" \
        --workdir /workdir \
        "quay.io/helmpack/chart-testing:$CT_VERSION" \
        cat
}
cleanup() {
    echo 'Removing ct container...'
    docker kill ct > /dev/null 2>&1

    kind delete cluster --name "$CLUSTER_NAME" || true

    echo 'Done!'
}

docker_exec() {
    docker exec --interactive ct "$@"
}

create_kind_cluster() {
    if ! [ -x "$(command -v kind)" ]; then
        echo 'Installing kind...'
        curl -sSLo kind "https://github.com/kubernetes-sigs/kind/releases/download/$KIND_VERSION/kind-linux-amd64"
        chmod +x kind
        sudo mv kind /usr/local/bin/kind
    fi

    kind delete cluster --name "$CLUSTER_NAME" || true
    kind create cluster --name "$CLUSTER_NAME" --config test/kind-config.yaml --image "kindest/node:$K8S_VERSION" --wait 60s

    docker_exec mkdir -p /root/.kube

    echo 'Copying kubeconfig to container...'
    local kubeconfig
    kubeconfig="$(kind get kubeconfig-path --name "$CLUSTER_NAME")"
    docker cp "$kubeconfig" ct:/root/.kube/config

    docker_exec kubectl cluster-info

    docker_exec kubectl get nodes

    echo 'Cluster ready!'
}

install_tiller() {
    echo 'Installing Tiller...'
    docker_exec kubectl --namespace kube-system create sa tiller
    docker_exec kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    docker_exec helm init --service-account tiller --upgrade --wait
}

install_local-path-provisioner() {
    # kind doesn't support dynamic PVC provisioning yet; this is one way to get it working

    # Remove the default storage class. It will be recreated by local-path-provisioner
    docker_exec kubectl delete storageclass standard

    echo 'Installing local-path-provisioner...'
    docker_exec kubectl apply -f test/local-path-provisioner.yaml
}

install_charts() {
    docker_exec ct lint-and-install --chart-repos couchdb= --chart-dirs .
}

main() {
    trap cleanup EXIT

    run_ct_container
    create_kind_cluster
    install_tiller
    install_local-path-provisioner
    install_charts
}

main
Empty file.
