
Added Galera example to kubernetes. #13729

Merged · 1 commit · Oct 5, 2015
170 changes: 170 additions & 0 deletions examples/mysql-galera/README.md
@@ -0,0 +1,170 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->

<!-- BEGIN STRIP_FOR_RELEASE -->

<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">
<img src="http://kubernetes.io/img/warning.png" alt="WARNING"
width="25" height="25">

<h2>PLEASE NOTE: This document applies to the HEAD of the source tree</h2>

If you are using a released version of Kubernetes, you should
refer to the docs that go with that version.

<strong>
The latest 1.0.x release of this document can be found
[here](http://releases.k8s.io/release-1.0/examples/mysql-galera/README.md).

Documentation for other releases can be found at
[releases.k8s.io](http://releases.k8s.io).
</strong>
--

<!-- END STRIP_FOR_RELEASE -->

<!-- END MUNGE: UNVERSIONED_WARNING -->

## Galera Replication for MySQL on Kubernetes

This document describes a simple demonstration of running MySQL synchronous replication using Galera, specifically Percona XtraDB Cluster. The example is simplistic and uses a fixed number of nodes (3), but the idea can be built upon and made more dynamic as Kubernetes matures.

### Prerequisites

This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](../../docs/getting-started-guides/) for installation instructions for your platform.

Also, this example requires the image found in the ```image``` directory. For your convenience, it is built and available on Docker's public image repository as ```capttofu/percona_xtradb_cluster_5_6```. You can also build the image yourself, which merely requires updating the image name in the pod or replication controller files.
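For example, building and pushing it yourself might look like the following (the registry prefix is a placeholder; substitute your own):

```
docker build -t <your-registry>/percona_xtradb_cluster_5_6 examples/mysql-galera/image/
docker push <your-registry>/percona_xtradb_cluster_5_6
```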

This example was tested on OS X with a Galera cluster running on VMware, using the fine repo developed by Paulo Pires [https://github.com/pires/kubernetes-vagrant-coreos-cluster] and client programs built for OS X.

### Basic concept

The basic idea is this: three replication controllers, each with a single pod; corresponding per-node services; and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is avoiding a single point of failure, hence the need to distribute each node or slave across hosts or even geographical locations. Kubernetes is well-suited to facilitating this design pattern with the service and replication controller configuration files in this example.

By default, there are three pods (and hence replication controllers) in this cluster. This number can be increased using the variable ```NUM_NODES```, specified in the replication controller configuration file. It's important to know that the number of nodes must always be odd.

When a replication controller is created, the corresponding container starts and runs an entrypoint script that installs the MySQL system tables, sets up users, and builds a list of servers to be used with the Galera parameter ```wsrep_cluster_address```. This is a list of running nodes that Galera uses to elect a node to obtain SST (State Snapshot Transfer) from.
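On ```pxc-node1```, for instance, once all three services exist, the entrypoint script (shown later in this PR) would end up writing something like the following into ```cluster.cnf``` — each node excludes its own entry from the list, and DNS names are used unless ```USE_IP``` is set:

```
wsrep_cluster_address=gcomm://pxc-node2,pxc-node3
```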

Note: Kubernetes best practice is to pre-create the services for each controller. The configuration files here contain both the service and the replication controller for a given node, so creating one file results in both a service and a replication controller running for that node. It is important that ```pxc-node1.yaml``` be processed first, and that no other ```pxc-nodeN``` services exist without corresponding replication controllers. If there is a node listed in ```wsrep_cluster_address``` without a backing Galera node, there will be nothing to obtain SST from, which causes that node to shut itself down and the container in question to exit (and be relaunched, repeatedly).

First, create the overall cluster service that will be used to connect to the cluster:

```kubectl create -f examples/mysql-galera/pxc-cluster-service.yaml```
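As a rough sketch only — reconstructed from the selector (```unit=pxc-cluster```) and port (```3306/TCP```) visible in the ```kubectl get services``` output later in this document; the actual file in this PR is authoritative — this service plausibly looks like:

```
apiVersion: v1
kind: Service
metadata:
  name: pxc-cluster
spec:
  ports:
    - port: 3306
  selector:
    unit: pxc-cluster
```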

Create the service and replication controller for the first node:

```kubectl create -f examples/mysql-galera/pxc-node1.yaml```
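Each per-node file defines both a service and a replication controller. Again as a rough sketch only — reconstructed from the selectors, ports, image tag, and environment variables that appear elsewhere in this example; the label set and the values marked below are assumptions, so consult the actual file — ```pxc-node1.yaml``` plausibly resembles:

```
apiVersion: v1
kind: Service
metadata:
  name: pxc-node1
spec:
  ports:
    - name: mysql
      port: 3306
    - name: sst
      port: 4444
    - name: replication
      port: 4567
    - name: ist
      port: 4568
  selector:
    name: pxc-node1
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pxc-node1
spec:
  replicas: 1
  selector:
    name: pxc-node1
  template:
    metadata:
      labels:
        name: pxc-node1
        unit: pxc-cluster        # assumed, so the pod is also behind the pxc-cluster service
    spec:
      containers:
        - name: pxc-node1
          image: capttofu/percona_xtradb_cluster_5_6:beta
          ports:
            - containerPort: 3306
            - containerPort: 4444
            - containerPort: 4567
            - containerPort: 4568
          env:
            - name: GALERA_CLUSTER
              value: "true"
            - name: NUM_NODES          # optional; the entrypoint defaults to 3
              value: "3"
            - name: WSREP_SST_PASSWORD
              value: changethis        # placeholder
            - name: MYSQL_ROOT_PASSWORD
              value: changethis        # placeholder
```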

### Create services and controllers for the remaining nodes

Repeat the previous step for ```pxc-node2``` and ```pxc-node3```.

When complete, you should be able to connect with a mysql client to the IP address
of the ```pxc-cluster``` service and find a working cluster.

### An example of creating a cluster

Shown below is an example of using ```kubectl``` from within the ```./examples/mysql-galera``` directory to create the cluster, along with the actual output of each command:

```
$ kubectl create -f examples/mysql-galera/pxc-cluster-service.yaml
services/pxc-cluster

$ kubectl create -f examples/mysql-galera/pxc-node1.yaml
services/pxc-node1
replicationcontrollers/pxc-node1

$ kubectl create -f examples/mysql-galera/pxc-node2.yaml
services/pxc-node2
replicationcontrollers/pxc-node2

$ kubectl create -f examples/mysql-galera/pxc-node3.yaml
services/pxc-node3
replicationcontrollers/pxc-node3

```

**Reviewer:** (nit) Isn't this code block repeating what you just said in the previous section?

**Author:** Cleaned that up; I wanted actual output of it being run, explained as such.


### Confirm a running cluster

Verify everything is running:

```
$ kubectl get rc,pods,services
CONTROLLER   CONTAINER(S)   IMAGE(S)                                    SELECTOR         REPLICAS
pxc-node1    pxc-node1      capttofu/percona_xtradb_cluster_5_6:beta    name=pxc-node1   1
pxc-node2    pxc-node2      capttofu/percona_xtradb_cluster_5_6:beta    name=pxc-node2   1
pxc-node3    pxc-node3      capttofu/percona_xtradb_cluster_5_6:beta    name=pxc-node3   1
NAME              READY     STATUS    RESTARTS   AGE
pxc-node1-h6fqr   1/1       Running   0          41m
pxc-node2-sfqm6   1/1       Running   0          41m
pxc-node3-017b3   1/1       Running   0          40m
NAME          LABELS    SELECTOR           IP(S)            PORT(S)
pxc-cluster   <none>    unit=pxc-cluster   10.100.179.58    3306/TCP
pxc-node1     <none>    name=pxc-node1     10.100.217.202   3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP
pxc-node2     <none>    name=pxc-node2     10.100.47.212    3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP
pxc-node3     <none>    name=pxc-node3     10.100.200.14    3306/TCP
                                                            4444/TCP
                                                            4567/TCP
                                                            4568/TCP

```

The cluster should be ready for use!

### Connecting to the cluster

Using the ```pxc-cluster``` service name and the mysql client run interactively via ```kubectl exec```, it is possible to connect from any of the pods, using the mysql client in the pod's container, and verify the cluster size, which should be ```3```. In the example below, the ```pxc-node3``` replication controller's pod is chosen, and to find out the pod name, ```kubectl get pods``` and ```awk``` are employed:

```
$ kubectl get pods|grep pxc-node3|awk '{ print $1 }'
pxc-node3-0b5mc

$ kubectl exec pxc-node3-0b5mc -i -t -- mysql -u root -p -h pxc-cluster

Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11

Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.06 sec)

```

At this point, there is a working cluster that can be used via the ```pxc-cluster``` service IP address!
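As a quick check from any machine that can reach the service network (the service IP below is the one from the example output above; yours will differ):

```
mysql -h 10.100.179.58 -u root -p
```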

### TODO

This setup can certainly become more fluid and dynamic. One idea is to use an etcd container to store information about node state. Originally, a read-only Kubernetes API was available to each container, but that has since been removed. Also, Kelsey Hightower is working on moving the functionality of confd to Kubernetes; this could replace the shell duct tape that builds the cluster configuration file for the image.



<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/mysql-galera/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
42 changes: 42 additions & 0 deletions examples/mysql-galera/image/Dockerfile
@@ -0,0 +1,42 @@
FROM ubuntu:trusty

# add our user and group first to make sure their IDs get assigned
# consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql

ENV PERCONA_XTRADB_VERSION 5.6
ENV MYSQL_VERSION 5.6
ENV TERM linux

RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y perl --no-install-recommends && rm -rf /var/lib/apt/lists/*

**Reviewer:** Can you batch this apt-get install command with the apt-get install below, or is there a reason they need to be separate?

**Author:** Yes, I'll do that.


RUN apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A

RUN echo "deb http://repo.percona.com/apt trusty main" > /etc/apt/sources.list.d/percona.list
RUN echo "deb-src http://repo.percona.com/apt trusty main" >> /etc/apt/sources.list.d/percona.list

# the "/var/lib/mysql" stuff here is because the mysql-server
# postinst doesn't have an explicit way to disable the
# mysql_install_db codepath besides having a database already
# "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN { \
echo percona-server-server-5.6 percona-server-server/data-dir select ''; \
echo percona-server-server-5.6 percona-server-server/root_password password ''; \
} | debconf-set-selections \
&& apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y percona-xtradb-cluster-client-"${MYSQL_VERSION}" \
percona-xtradb-cluster-common-"${MYSQL_VERSION}" percona-xtradb-cluster-server-"${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql && chown -R mysql:mysql /var/lib/mysql

**Reviewer:** Make this line a separate RUN line, since it represents a different operation (and has its own comment).

**Author:** This was copied from the official docker/mysql image, which I'm trying to follow as closely as possible in how it sets up MySQL, and I think their intent was to make this somewhat "atomic".

**Reviewer:** I see. For this and elsewhere, if you're following a "canonical" example for doing things, you can probably disregard my comments.


VOLUME /var/lib/mysql

COPY my.cnf /etc/mysql/my.cnf
COPY cluster.cnf /etc/mysql/conf.d/cluster.cnf

COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

EXPOSE 3306 4444 4567 4568
CMD ["mysqld"]
12 changes: 12 additions & 0 deletions examples/mysql-galera/image/cluster.cnf
@@ -0,0 +1,12 @@
[mysqld]

**Reviewer:** What is this file read by? I can't find any references to it.

**Author:** It's included by MySQL from /etc/mysql/conf.d and sets up Galera replication. I keep it separate from the top-level my.cnf as a best practice.
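(For context: on Debian/Ubuntu the stock ```my.cnf``` typically ends with an include directive along these lines, which is what pulls ```cluster.cnf``` in:)

```
!includedir /etc/mysql/conf.d/
```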


wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_address=gcomm://
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

wsrep_sst_method=xtrabackup-v2
wsrep_node_address=127.0.0.1
wsrep_cluster_name=galera_kubernetes
wsrep_sst_auth=sstuser:changethis
164 changes: 164 additions & 0 deletions examples/mysql-galera/image/docker-entrypoint.sh
@@ -0,0 +1,164 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#
# This script does the following:
#
# 1. Sets up database privileges by building an SQL script
# 2. Starts MySQL a first time, passing that SQL script as its init file
# 3. Modifies my.cnf and cluster.cnf to reflect the available nodes to join
#

# if NUM_NODES not passed, default to 3
if [ -z "$NUM_NODES" ]; then
  NUM_NODES=3
fi

if [ "${1:0:1}" = '-' ]; then
  set -- mysqld "$@"
fi

# if the command passed is 'mysqld' via CMD, then begin processing.
if [ "$1" = 'mysqld' ]; then
  # read DATADIR from the MySQL config
  DATADIR="$("$@" --verbose --help 2>/dev/null | awk '$1 == "datadir" { print $2; exit }')"

  # only build the initial SQL script if the system tables have not already
  # been created by mysql_install_db; otherwise proceed straight to startup
  if [ ! -d "$DATADIR/mysql" ]; then
    # fail if the user didn't supply a root password
    if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_ALLOW_EMPTY_PASSWORD" ]; then
      echo >&2 'error: database is uninitialized and MYSQL_ROOT_PASSWORD not set'
      echo >&2 '  Did you forget to add -e MYSQL_ROOT_PASSWORD=... ?'
      exit 1
    fi

    # mysql_install_db installs the system tables
    echo 'Running mysql_install_db ...'
    mysql_install_db --datadir="$DATADIR"
    echo 'Finished mysql_install_db'

    # this script will be run once when MySQL first starts and will ensure
    # proper user permissions
    tempSqlFile='/tmp/mysql-first-time.sql'
    cat > "$tempSqlFile" <<-EOSQL
DELETE FROM mysql.user ;
CREATE USER 'root'@'%' IDENTIFIED BY '${MYSQL_ROOT_PASSWORD}' ;
GRANT ALL ON *.* TO 'root'@'%' WITH GRANT OPTION ;
EOSQL

    if [ "$MYSQL_DATABASE" ]; then
      echo "CREATE DATABASE IF NOT EXISTS \`$MYSQL_DATABASE\` ;" >> "$tempSqlFile"
    fi

    if [ "$MYSQL_USER" -a "$MYSQL_PASSWORD" ]; then
      echo "CREATE USER '$MYSQL_USER'@'%' IDENTIFIED BY '$MYSQL_PASSWORD' ;" >> "$tempSqlFile"

      if [ "$MYSQL_DATABASE" ]; then
        echo "GRANT ALL ON \`$MYSQL_DATABASE\`.* TO '$MYSQL_USER'@'%' ;" >> "$tempSqlFile"
      fi
    fi

    # add the SST (State Snapshot Transfer) user if clustering is turned on
    if [ -n "$GALERA_CLUSTER" ]; then
      # this is the State Snapshot Transfer user (SST, initial dump or xtrabackup user)
      WSREP_SST_USER=${WSREP_SST_USER:-"sst"}
      if [ -z "$WSREP_SST_PASSWORD" ]; then
        echo >&2 'error: Galera cluster is enabled and WSREP_SST_PASSWORD is not set'
        echo >&2 '  Did you forget to add -e WSREP_SST_PASSWORD=... ?'
        exit 1
      fi
      # add State Snapshot Transfer (SST) user privileges
      echo "CREATE USER '${WSREP_SST_USER}'@'localhost' IDENTIFIED BY '${WSREP_SST_PASSWORD}';" >> "$tempSqlFile"
      echo "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO '${WSREP_SST_USER}'@'localhost';" >> "$tempSqlFile"
    fi

    echo 'FLUSH PRIVILEGES ;' >> "$tempSqlFile"

    # add the SQL file to mysqld's command line args
    set -- "$@" --init-file="$tempSqlFile"
  fi

  chown -R mysql:mysql "$DATADIR"
fi

# if clustering is turned on, then proceed to build the cluster setting
# strings that will be interpolated into the config files
if [ -n "$GALERA_CLUSTER" ]; then
  # this is the State Snapshot Transfer user (SST, initial dump or xtrabackup user)
  WSREP_SST_USER=${WSREP_SST_USER:-"sst"}
  if [ -z "$WSREP_SST_PASSWORD" ]; then
    echo >&2 'error: database is uninitialized and WSREP_SST_PASSWORD not set'
    echo >&2 '  Did you forget to add -e WSREP_SST_PASSWORD=xxx ?'
    exit 1
  fi

  # user/password for the SST user
  sed -i -e "s|^wsrep_sst_auth=sstuser:changethis|wsrep_sst_auth=${WSREP_SST_USER}:${WSREP_SST_PASSWORD}|" /etc/mysql/conf.d/cluster.cnf

  # set the node's own address
  WSREP_NODE_ADDRESS=`ip addr show | grep -E '^[ ]*inet' | grep -m1 global | awk '{ print $2 }' | sed -e 's/\/.*//'`
  if [ -n "$WSREP_NODE_ADDRESS" ]; then
    sed -i -e "s|^wsrep_node_address=.*$|wsrep_node_address=${WSREP_NODE_ADDRESS}|" /etc/mysql/conf.d/cluster.cnf
  fi

  # if the string is not defined or is only 'gcomm://', this means bootstrap
  if [ -z "$WSREP_CLUSTER_ADDRESS" -o "$WSREP_CLUSTER_ADDRESS" == "gcomm://" ]; then
    # if empty, set to 'gcomm://'
    # NOTE: this list does not imply membership.
    # It only means "obtain SST and join from one of these..."
    if [ -z "$WSREP_CLUSTER_ADDRESS" ]; then
      WSREP_CLUSTER_ADDRESS="gcomm://"
    fi

    # loop through the number of nodes
    for NUM in `seq 1 $NUM_NODES`; do
      NODE_SERVICE_HOST="PXC_NODE${NUM}_SERVICE_HOST"

      # if the service host variable is set
      if [ -n "${!NODE_SERVICE_HOST}" ]; then
        # if it is not this node's own hostname, then add it
        if [ $(expr "$HOSTNAME" : "pxc-node${NUM}") -eq 0 ]; then
          # if not the first bootstrap node, add a comma
          if [ "$WSREP_CLUSTER_ADDRESS" != "gcomm://" ]; then
            WSREP_CLUSTER_ADDRESS="${WSREP_CLUSTER_ADDRESS},"
          fi
          # append: if the user specifies USE_IP, use the service IP
          if [ -n "${USE_IP}" ]; then
            WSREP_CLUSTER_ADDRESS="${WSREP_CLUSTER_ADDRESS}${!NODE_SERVICE_HOST}"
          # otherwise use DNS
          else
            WSREP_CLUSTER_ADDRESS="${WSREP_CLUSTER_ADDRESS}pxc-node${NUM}"
          fi
        fi
      fi
    done
  fi

  # WSREP_CLUSTER_ADDRESS is now complete and will be interpolated into the
  # cluster address string (wsrep_cluster_address) in the cluster
  # configuration file, cluster.cnf
  if [ -n "$WSREP_CLUSTER_ADDRESS" -a "$WSREP_CLUSTER_ADDRESS" != "gcomm://" ]; then
    sed -i -e "s|^wsrep_cluster_address=gcomm://|wsrep_cluster_address=${WSREP_CLUSTER_ADDRESS}|" /etc/mysql/conf.d/cluster.cnf
  fi
fi

# a random server ID is needed
sed -i -e "s/^server\-id=.*$/server-id=${RANDOM}/" /etc/mysql/my.cnf

# finally, start mysql
exec "$@"