Merge branch 'master' of github.com:zalando/spilo
Alexander Kukushkin committed May 18, 2015
2 parents db87063 + ff45ee4 commit 05d0a7b
Showing 5 changed files with 101 additions and 21 deletions.
2 changes: 1 addition & 1 deletion docs/README.md
@@ -1,3 +1,3 @@
Introduction
============
Spilo (სფილო, small elephants) is a Highly Available PostgreSQL cluster (HA-cluster). It will run a number of clusters with one cluster being a master and the others being slaves. Its purpose is to provide a very resilient, highly available PostgreSQL cluster which can be configured and started within minutes.
Spilo (სპილო, elephant) is a Highly Available PostgreSQL cluster (HA-cluster). It will run a number of clusters with one cluster being a master and the others being slaves. Its purpose is to provide a very resilient, highly available PostgreSQL cluster which can be configured and started within minutes.
79 changes: 79 additions & 0 deletions docs/User's Guide/Deployment.md
@@ -0,0 +1,79 @@
Prerequisites
=============

You will need a VPC and permission to create new infrastructure in it.

To deploy spilo to AWS you will need the tooling from [stups](http://stups.readthedocs.org/en/latest).
The gist of it is to:

* have an AWS account with enough rights for deployment
* install Python 3.4
* install `stups-mai`
* install `stups-senza`

Before going any further, ensure you can log in to your VPC using the `stups` tooling.
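
For reference, a minimal install-and-login sketch (the PyPI package names are the ones used by the stups project; the profile name is a placeholder and the exact `mai` subcommands may differ between versions):

    sudo pip3 install --upgrade stups-mai stups-senza
    mai create example-profile   # register an AWS profile once
    mai login example-profile    # obtain temporary credentials for the VPC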

Deploying etcd using senza
==========================

Deploying etcd should be a once-in-a-VPC-lifetime event. We can use `senza` to deploy the etcd appliance.
If you already have other Spilo instances running, you may reuse the etcd-appliance that is already running.

The following prerequisites need to be met:

* `stups-senza` is installed
* a [Senza Definition](http://stups.readthedocs.org/en/latest/components/senza.html#senza-definition) is available; use the provided `etcd-appliance.yaml` as an example
* the Security Group referenced in the Senza Definition has been created (one way to do this is sketched below)
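
The Security Group can be created with the plain AWS CLI; everything below is a placeholder sketch (group name, VPC and group ids, CIDR), and ports 2379/2380 are assumed because they are the standard etcd 2.x client and peer ports:

    aws ec2 create-security-group --group-name etcd-appliance \
        --description "etcd appliance" --vpc-id vpc-12345678
    aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
        --protocol tcp --port 2379-2380 --cidr 10.0.0.0/8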

To deploy the etcd-appliance, use the following:

    senza create DEFINITION.yaml VERSION HOSTED_ZONE DOCKER_IMAGE

This will create and execute a cloud formation template for you.

Example:

Argument           | Value
-------------------|-------
Definition         | etcd-appliance.yaml
Version            | 1
Hosted zone        | repository.example.com
Docker repository  | docker.registry.example.com
Docker image       | repository/etcd-appliance
Image tag          | 0.2-SNAPSHOT

    senza create etcd-appliance.yaml 1 repository.example.com docker.registry.example.com/repository/etcd-appliance:0.2-SNAPSHOT
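
etcd's DNS discovery (which spilo relies on below) expects `_etcd-server._tcp` and `_etcd-client._tcp` SRV records in the hosted zone. Assuming the etcd-appliance maintains these records, a quick sanity check from within the VPC could look like this:

    dig +short SRV _etcd-server._tcp.repository.example.com
    dig +short SRV _etcd-client._tcp.repository.example.com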

Deploying Spilo using senza
===========================

* Have a general idea of the usage characteristics of the appliance
* Have a unique name for the cluster

You can use `senza init` to create a senza definition for the spilo appliance;
the `ETCD_DISCOVERY_URL` should point to the `HOSTED_ZONE` of the etcd-appliance that you want to use.

    senza init DEFINITION.yaml

Choose `postgresapp` as the template. Senza will then prompt you for some information; you may want to override the defaults.

If you want, now is the time to edit `DEFINITION.yaml` to your needs.

To deploy the appliance using senza, do the following (we use `CLUSTER_NAME` for the `VERSION` that senza requires):

    senza create [OPTIONS] DEFINITION.yaml CLUSTER_NAME DOCKER_IMAGE

Example:

Argument           | Value
-------------------|-------
Definition         | spilo.yaml
Cluster Name       | pompeii
Docker repository  | docker.registry.example.com
Docker image       | repository/spilo
Image tag          | 0.7-SNAPSHOT

    senza create spilo.yaml pompeii docker.registry.example.com/repository/spilo:0.7-SNAPSHOT

You can now monitor the progress using:

    senza watch -n 2 DEFINITION.yaml CLUSTER_NAME
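
With the example values from above this becomes:

    senza watch -n 2 spilo.yaml pompeii

`senza list spilo.yaml` will additionally show the state of the CloudFormation stack itself.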
12 changes: 4 additions & 8 deletions postgres-appliance/Dockerfile
@@ -33,8 +33,7 @@ RUN usermod -d /home/postgres -m postgres

# install WAL-e
RUN apt-get install build-essential -y
RUN apt-get install python-pip -y
RUN apt-get install python-dev libxml2-dev libxslt-dev libffi-dev lzop pv -y
RUN apt-get install python-pip python-dev libxml2-dev libxslt-dev libffi-dev lzop pv daemontools -y
RUN pip install six --upgrade
RUN pip install wal-e

@@ -43,18 +42,15 @@ ENV ETCDVERSION 2.0.9
RUN apt-get install curl -y
RUN curl -L https://github.com/coreos/etcd/releases/download/v${ETCDVERSION}/etcd-v${ETCDVERSION}-linux-amd64.tar.gz -o etcd-v${ETCDVERSION}-linux-amd64.tar.gz && tar vzxf etcd-v${ETCDVERSION}-linux-amd64.tar.gz && cp etcd-v${ETCDVERSION}-linux-amd64/etcd* /bin/

# install haproxy
RUN apt-get install haproxy

ENV PATH $PATH:/usr/lib/postgresql-${PGVERSION}/bin

ADD postgres_ha.sh /home/postgres/
RUN chown postgres:postgres /home/postgres/postgres_ha.sh
RUN chown postgres:postgres /home/postgres -R
RUN chmod 700 /home/postgres/postgres_ha.sh

ENV ETCD_ADDRESS=""
ENV ETCD_DISCOVERY_URL postgres.acid.zalan.do
ENV ETCD_DISCOVERY_URL postgres.acid.example.com
ENV SCOPE test
ENV WAL_S3_BUCKET spilo-example-com
ENV DEBUG 0
# run subsequent commands as user postgres
USER postgres
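As an aside, the image built from this Dockerfile can be test-driven locally. A sketch, assuming the repository root as working directory, a placeholder image tag, and that the (not shown) `CMD` starts `postgres_ha.sh`; the environment variables are the ones declared above, and with `DEBUG=1` the startup script (see `postgres_ha.sh` below) execs a shell instead of starting governor:

    docker build -t repository/spilo:0.7-SNAPSHOT postgres-appliance/
    docker run -it \
        -e SCOPE=test \
        -e ETCD_DISCOVERY_URL=postgres.acid.example.com \
        -e WAL_S3_BUCKET=spilo-example-com \
        -e DEBUG=1 \
        repository/spilo:0.7-SNAPSHOT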
2 changes: 1 addition & 1 deletion postgres-appliance/TODO
@@ -9,7 +9,7 @@ Done:
Done:
run etcd proxy instead of full-scale etcd

Cancelled, since we are not going to use haproxye:
Cancelled, since we are not going to use haproxy:
configure haproxy to fetch information from
the etcd-proxy and point to the acting master

27 changes: 16 additions & 11 deletions postgres-appliance/postgres_ha.sh
@@ -1,20 +1,21 @@
#!/bin/bash

PATH=$PATH:/usr/lib/postgresql/${PGVERSION}/bin
WALE_ENV_DIR=/home/postgres/etc/wal-e.d/env

function write_postgres_yaml
{
  local_address=$(cat /etc/hosts |grep ${HOSTNAME}|cut -f1)
  aws_private_ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
  cat >> postgres.yml <<__EOF__
loop_wait: 10
aws_use_host_address: "on"
etcd:
  scope: $SCOPE
  ttl: 30
  host: 127.0.0.1:8080
postgresql:
  name: postgresql_${HOSTNAME}
  listen: ${local_address}:5432
  listen: 0.0.0.0:5432
  connect_address: ${aws_private_ip}:5432
  data_dir: $PGDATA
  replication:
    username: standby
@@ -28,31 +29,35 @@ postgresql:
  parameters:
    archive_mode: "on"
    wal_level: hot_standby
    archive_command: /bin/true
    archive_command: "envdir ${WALE_ENV_DIR} wal-e --aws-instance-profile wal-push \"%p\" -p 1"
    max_wal_senders: 5
    wal_keep_segments: 8
    archive_timeout: 1800s
    max_replication_slots: 5
    hot_standby: "on"
  recovery_conf:
    restore_command: "envdir ${WALE_ENV_DIR} wal-e --aws-instance-profile wal-fetch \"%f\" \"%p\" -p 1"
__EOF__
}

function write_archive_command_environment
{
  mkdir -p ${WALE_ENV_DIR}
  echo "s3://${WAL_S3_BUCKET}/spilo/${SCOPE}/wal/" > ${WALE_ENV_DIR}/WALE_S3_PREFIX
}

# get governor code
git clone https://github.com/zalando/governor.git

write_postgres_yaml

write_archive_command_environment

# start etcd proxy
# for the '-proxy on' mode, the url of the etcd cluster is TBD
[ "$DEBUG" -eq 1 ] && exec /bin/bash

if [[ -n $ETCD_ADDRESS ]]
then
# address is still useful for local debugging
etcd -name "proxy-$SCOPE" -proxy on -bind-addr 127.0.0.1:8080 --data-dir=etcd -initial-cluster $ETCD_ADDRESS &
else
etcd -name "proxy-$SCOPE" -proxy on -bind-addr 127.0.0.1:8080 --data-dir=etcd -discovery-srv $ETCD_DISCOVERY_URL &
fi
etcd -name "proxy-$SCOPE" -proxy on -bind-addr 127.0.0.1:8080 --data-dir=etcd -discovery-srv $ETCD_DISCOVERY_URL &

exec governor/governor.py "/home/postgres/postgres.yml"

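A note on the WAL-E wiring above: `archive_command` pushes every completed WAL segment to the S3 prefix that `write_archive_command_environment` stores in the envdir, and `restore_command` fetches segments back on a replica. From inside a running container the same envdir allows a quick manual connectivity check (`backup-list` is a standard wal-e subcommand; it simply lists the base backups, if any, under the prefix):

    envdir /home/postgres/etc/wal-e.d/env wal-e --aws-instance-profile backup-list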
