Merge pull request #8 from zalando/feature/no-etcd-proxy
Don't start etcd proxy
CyberDem0n committed Jun 25, 2015
2 parents 5bf436c + 87fd47c commit 2dd4bef
Showing 10 changed files with 20 additions and 227 deletions.
2 changes: 1 addition & 1 deletion docs/user-guide/deploy_spilo.md
@@ -8,7 +8,7 @@ Prerequisites
 * The S3 mint bucket needs to be added to the Access Control
 
 You can use senza init to create a senza definition for the spilo appliance,
-the `ETCD_DISCOVERY_URL` should point to `HOSTED_ZONE` from the etcd-appliance that you want to use.
+the `ETCD_DISCOVERY_DOMAIN` should point to `HOSTED_ZONE` from the etcd-appliance that you want to use.
 
     senza init DEFINITION.yaml
 
2 changes: 1 addition & 1 deletion etcd-cluster-appliance/Dockerfile
@@ -2,7 +2,7 @@ FROM zalando/ubuntu:14.04.1-1
 
 ENV USER etcd
 ENV HOME /home/${USER}
-ENV ETCDVERSION 2.0.11
+ENV ETCDVERSION 2.0.12
 
 ## Install python
 RUN apt-get update && apt-get -y install python python-boto
 
9 changes: 9 additions & 0 deletions etcd-cluster-appliance/etcd-cluster.yaml
@@ -15,7 +15,14 @@ SenzaComponents:
         source: '{{Arguments.DockerImage}}'
         environment:
           HOSTED_ZONE: '{{Arguments.HostedZone}}'
+        mounts:
+          /home/etcd:
+            partition: none
+            filesystem: tmpfs
+            erase_on_boot: false
+            options: size=512m
         mint_bucket: '{{Arguments.MintBucket}}'
+        scalyr_account_key: '{{Arguments.ScalyrAccountKey}}'
       Type: Senza::TaupageAutoScalingGroup
       AutoScaling:
         Minimum: 3
@@ -29,6 +36,8 @@ SenzaInfo:
         Description: Docker image of etcd-cluster.
     - MintBucket:
         Description: The mint S3 bucket for OAuth 2.0 credentials
+    - ScalyrAccountKey:
+        Description: scalyr account key
   StackName: etcd-cluster
 Resources:
   EtcdRole:
 
6 changes: 3 additions & 3 deletions postgres-appliance/Dockerfile
@@ -27,8 +27,8 @@ RUN apt-get install postgresql-${PGVERSION} postgresql-${PGVERSION}-dbg postgres
 # Remove the default cluster, which Debian stupidly starts right after installation of the packages
 RUN pg_dropcluster --stop ${PGVERSION} main
 
-# install psycopg2 and requests (for governor)
-RUN apt-get install python-psycopg2 -y
+# install psycopg2 and dnspython (for governor)
+RUN apt-get install python-psycopg2 python-dnspython -y
 
 # Set PGHOME as a login directory for the PostgreSQL user.
 RUN usermod -d $PGHOME -m postgres
@@ -60,7 +60,7 @@ RUN chown postgres:postgres $PGHOME -R
 RUN chown postgres:postgres /postgres_ha.sh
 RUN chmod 700 /postgres_ha.sh
 
-ENV ETCD_DISCOVERY_URL postgres.acid.example.com
+ENV ETCD_DISCOVERY_DOMAIN postgres.acid.example.com
 ENV SCOPE test
 ENV WAL_S3_BUCKET spilo-example-com
 ENV DEBUG 0
 
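The newly added python-dnspython package reflects the core change in this pull request: governor discovers the etcd cluster through DNS SRV records on ETCD_DISCOVERY_DOMAIN instead of going through a locally running etcd proxy. A minimal sketch of such a lookup, assuming etcd's conventional _etcd-client._tcp.<domain> SRV naming (an illustration, not governor's actual code):

    # Illustrative sketch only: resolve etcd client endpoints from a discovery
    # domain via DNS SRV records, which is what python-dnspython enables here.
    # The domain below is the example value from the Dockerfile.
    import dns.resolver

    def etcd_client_endpoints(discovery_domain):
        # etcd publishes its client endpoints as _etcd-client._tcp.<domain> SRV records
        answer = dns.resolver.query('_etcd-client._tcp.' + discovery_domain, 'SRV')
        return ['{0}:{1}'.format(r.target.to_text(omit_final_dot=True), r.port)
                for r in answer]

    if __name__ == '__main__':
        print(etcd_client_endpoints('postgres.acid.example.com'))
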
2 changes: 1 addition & 1 deletion postgres-appliance/README
@@ -12,7 +12,7 @@ This is an early stage image not meant for production yet.
 
 One can run a HA cluster with 3 nodes using senza, i.e.
 
-    senza create spilo.yaml cluster_name etcd_discovery_url docker_image_path:version,
+    senza create spilo.yaml cluster_name etcd_discovery_domain docker_image_path:version,
 i.e. to create a cluster called lambda:
 
     senza create spilo.yaml lambda etcd.acid.example.com yourname/spilo:0.6-SNAPSHOT
 
16 changes: 1 addition & 15 deletions postgres-appliance/postgres_ha.sh
@@ -22,7 +22,7 @@ restapi:
 etcd:
   scope: $SCOPE
   ttl: 30
-  host: 127.0.0.1:${etcd_client_port}
+  discovery_srv: ${ETCD_DISCOVERY_DOMAIN}
 postgresql:
   name: postgresql_${HOSTNAME}
   listen: 0.0.0.0:${pg_port}
@@ -74,20 +74,6 @@ write_postgres_yaml
 
 write_archive_command_environment
 
-function noterm
-{
-    echo "Received TERM signal, but not doing anything"
-}
-
-# resurrect etcd if it's gone
-(
-    trap noterm SIGTERM
-    while true
-    do
-        etcd -name "proxy-$SCOPE" -proxy on --data-dir=etcd -discovery-srv $ETCD_DISCOVERY_URL
-    done
-) &
-
 # run wal-e s3 backup periodically
 (
     INITIAL=1
 
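With the background "resurrect etcd" loop gone, the appliance no longer starts a local etcd proxy at all; the generated governor configuration now carries discovery_srv: ${ETCD_DISCOVERY_DOMAIN} instead of host: 127.0.0.1:${etcd_client_port}. A rough, hypothetical sketch of what that distinction means for a client (not governor's actual implementation):

    # Hypothetical illustration of the config change above: with discovery_srv set,
    # an etcd member is resolved via SRV records; with host set, the fixed address
    # (previously the local proxy started by this script) is used instead.
    import dns.resolver

    def pick_etcd_endpoint(etcd_config):
        domain = etcd_config.get('discovery_srv')
        if domain:
            srv = dns.resolver.query('_etcd-client._tcp.' + domain, 'SRV')[0]
            return '{0}:{1}'.format(srv.target.to_text(omit_final_dot=True), srv.port)
        return etcd_config['host']

    # e.g. pick_etcd_endpoint({'discovery_srv': 'postgres.acid.example.com'})
    # or   pick_etcd_endpoint({'host': '127.0.0.1:2379'})
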
96 changes: 0 additions & 96 deletions postgres-appliance/spilo.yaml

This file was deleted.

38 changes: 0 additions & 38 deletions postgres-appliance/test.yaml

This file was deleted.

69 changes: 0 additions & 69 deletions postgres-appliance/test.ymal

This file was deleted.

7 changes: 4 additions & 3 deletions spilo.yaml
@@ -35,9 +35,10 @@ SenzaComponents:
         ports:
           5432: 5432
           8008: 8008
+        etcd_discovery_domain: "postgres.acid.example.com"
         environment:
           SCOPE: "{{Arguments.version}}"
-          ETCD_DISCOVERY_URL: "postgres.acid.zalan.do"
+          ETCD_DISCOVERY_DOMAIN: "postgres.acid.example.com"
           WAL_S3_BUCKET: "zalando-spilo-app"
         root: True
 Resources:
@@ -46,8 +47,8 @@ Resources:
     Properties:
       Type: CNAME
      TTL: 20
-      HostedZoneName: acid.zalan.do.
-      Name: "{{Arguments.version}}.acid.zalan.do."
+      HostedZoneName: acid.example.com.
+      Name: "{{Arguments.version}}.acid.example.com."
       ResourceRecords:
         - Fn::GetAtt:
           - PostgresLoadBalancer
