Merge branch 'master' into travis-flaky-tests
Anand Henry committed Dec 21, 2014
2 parents 58bfe93 + 2e24f76 commit dbece01
Showing 24 changed files with 286 additions and 241 deletions.
4 changes: 2 additions & 2 deletions Makefile
@@ -81,9 +81,9 @@ small_integration_test_files = \
keyspace_test.py \
keyrange_test.py \
mysqlctl.py \
sharded.py \
secure.py \
binlog.py \
sharded.py \
clone.py

medium_integration_test_files = \
@@ -99,8 +99,8 @@ large_integration_test_files = \
# The following tests are considered too flaky to be included
# in the continuous integration test suites
ci_skip_integration_test_files = \
resharding.py \
resharding_bytes.py \
resharding.py \
initial_sharding.py \
initial_sharding_bytes.py \
update_stream.py
1 change: 1 addition & 0 deletions bootstrap.sh
@@ -42,6 +42,7 @@ go get golang.org/x/net/context
go get golang.org/x/tools/cmd/goimports
go get github.com/golang/glog
go get github.com/coreos/go-etcd/etcd
go get github.com/golang/lint/golint

# Packages for uploading code coverage to coveralls.io
go get code.google.com/p/go.tools/cmd/cover
14 changes: 14 additions & 0 deletions docker/etcd/Dockerfile
@@ -0,0 +1,14 @@
# This is a Dockerfile for etcd that is built on the same base image
# as the Vitess Dockerfile, so the base image can be shared.
#
# This image also contains bash, which is needed for startup scripts,
# such as those found in the Vitess Kubernetes example. The official
# etcd Docker image on quay.io doesn't have any shell in it.
FROM golang:1.3-wheezy

RUN mkdir -p src/github.com/coreos && \
    cd src/github.com/coreos && \
    curl -sL https://github.com/coreos/etcd/archive/v0.4.6.tar.gz | tar -xz && \
    mv etcd-0.4.6 etcd && \
    go install github.com/coreos/etcd
CMD ["etcd"]
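
A minimal sketch of building and smoke-testing this image locally; the tag matches the one referenced by etcd-controller-template.yaml later in this commit, and neither command is part of the commit itself:

```
# Build the image with docker/etcd as the build context.
$ sudo docker build -t vitess/etcd:v0.4.6 docker/etcd
# The etcd binary installed by `go install` should be on the image's PATH.
$ sudo docker run --rm vitess/etcd:v0.4.6 which etcd
```
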
148 changes: 81 additions & 67 deletions examples/kubernetes/README.md
@@ -1,69 +1,66 @@
# Vitess on Kubernetes

This directory contains an example configuration for running Vitess on
[Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes/). Refer to the
appropriate [Getting Started Guide](https://github.com/GoogleCloudPlatform/kubernetes/#contents)
[Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes/).
Refer to the appropriate
[Getting Started Guide](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides)
to get Kubernetes up and running if you haven't already.

## Requirements

This example currently requires Kubernetes 0.4.x.
Later versions have introduced
[incompatible changes](https://groups.google.com/forum/#!topic/kubernetes-announce/idiwm36dN-g)
that break ZooKeeper support. The Kubernetes team plans to support
[ZooKeeper's use case](https://github.com/GoogleCloudPlatform/kubernetes/issues/1802)
again in the future. Until then, please *git checkout* the
[v0.4.3](https://github.com/GoogleCloudPlatform/kubernetes/tree/v0.4.3)
tag (or any newer v0.4.x) in your Kubernetes repository.
This example currently assumes Kubernetes v0.6.x. We recommend downloading a
[binary release](https://github.com/GoogleCloudPlatform/kubernetes/releases).

The easiest way to run the local commands like vtctl is just to install
[Docker](https://www.docker.com/)
on your workstation. You can also adapt the commands below to use a local
[Vitess build](https://github.com/youtube/vitess/blob/master/doc/GettingStarted.md)
by removing the docker preamble if you prefer.

## Starting ZooKeeper
## Starting an etcd cluster for Vitess

Once you have a running Kubernetes deployment, make sure
*kubernetes/cluster/kubecfg.sh* is in your path, and then run:

```
vitess$ examples/kubernetes/zk-up.sh
vitess$ examples/kubernetes/etcd-up.sh
```

This will create a quorum of ZooKeeper servers. You can check the status of the
pods with *kubecfg.sh list pods*, or by using the
This will create two clusters: one for the 'global' cell, and one for the
'test' cell.
You can check the status of the pods with *kubecfg.sh list pods* or by using the
[Kubernetes web interface](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/ux.md).
Note that it may take a while for each minion to download the Docker images the
first time it needs them, during which time the pod status will be *Waiting*.
first time it needs them, during which time the pod status will be *Pending*.

Clients can connect to port 2181 of any
[minion](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/DESIGN.md#cluster-architecture)
(assuming the firewall is set to allow it), and the Kubernetes proxy will
load-balance the connection to any of the servers.
Once your etcd clusters are running, you need to make a record in the global
cell to tell Vitess how to find the other etcd cells. In this case, we only
have one cell named 'test'. Since we don't want to serve etcd on external IPs,
you'll need to SSH into one of your minions and use internal IPs.

A simple way to test out your ZooKeeper deployment is by logging into one of
your minions and running the *zk* client utility inside Docker. For example, if
you are running [Kubernetes on Google Compute Engine](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/gce.md):
For example, if you are running
[Kubernetes on Google Compute Engine](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/gce.md):

```
# find the service IPs
$ kubecfg.sh list services
Name Labels Selector IP Port
---------- ---------- ---------- ---------- ----------
etcd-test cell=test,name=etcd cell=test,name=etcd 10.0.207.54 4001
etcd-global cell=global,name=etcd cell=global,name=etcd 10.0.86.120 4001
# log in to a minion
$ gcloud compute ssh kubernetes-minion-1
# show zk command usage
kubernetes-minion-1:~$ sudo docker run -ti --rm vitess/base zk
# create a test node in ZooKeeper
kubernetes-minion-1:~$ sudo docker run -ti --rm vitess/base zk -zk.addrs $HOSTNAME:2181 touch -p /zk/test_cell/vt
# check that the node is there
kubernetes-minion-1:~$ sudo docker run -ti --rm vitess/base zk -zk.addrs $HOSTNAME:2181 ls /zk/test_cell
# create a node in the global etcd that points to the 'test' cell
kubernetes-minion-1:~$ curl -L http://10.0.86.120:4001/v2/keys/vt/cells/test -XPUT -d value=http://10.0.207.54:4001
{"action":"set","node":{"key":"/vt/cells/test","value":"http://10.0.207.54:4001","modifiedIndex":9,"createdIndex":9}}
```
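
As a quick check (a hedged follow-up, not part of the original README), you can read the record back through the same etcd v2 keys API:

```
kubernetes-minion-1:~$ curl -L http://10.0.86.120:4001/v2/keys/vt/cells/test
{"action":"get","node":{"key":"/vt/cells/test","value":"http://10.0.207.54:4001","modifiedIndex":9,"createdIndex":9}}
```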

To tear down the ZooKeeper deployment (again, with *kubecfg.sh* in your path):
To tear down the etcd deployment (again, with *kubecfg.sh* in your path):

```
vitess$ examples/kubernetes/zk-down.sh
vitess$ examples/kubernetes/etcd-down.sh
```

## Starting vtctld
@@ -72,44 +69,64 @@ The vtctld server provides a web interface to inspect the state of the system,
and also accepts RPC commands to modify the system.

```
vitess/examples/kubernetes$ kubecfg.sh -c vtctld-service.yaml create services
vitess/examples/kubernetes$ kubecfg.sh -c vtctld-pod.yaml create pods
vitess$ examples/kubernetes/vtctld-up.sh
```

To let you access vtctld from outside Kubernetes, the vtctld service is created
with the createExternalLoadBalancer option. On supported platforms, Kubernetes
will then automatically create an external IP that load balances onto the pods
comprising the service. Note that you also need to open port 15000 in your
firewall.

```
# open port 15000
$ gcloud compute firewall-rules create vtctld --allow tcp:15000
To access vtctld from your workstation, open up port 15000 to any minion in your
firewall. Then get the external address of that minion and visit *http://<minion-addr>:15000/*.
# get the address of the load balancer for vtctld
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
vtctld us-central1 12.34.56.78 TCP us-central1/targetPools/vtctld
# now load up vtctld on http://12.34.56.78:15000/
```

## Issuing commands with vtctlclient

If you've opened port 15000 on your minion's firewall, you can run *vtctlclient*
If you've opened port 15000 on your firewall, you can run *vtctlclient*
locally to issue commands:

```
# check the connection to vtctld, and list available commands
$ sudo docker run -ti --rm vitess/base vtctlclient -server <minion-addr>:15000
$ sudo docker run -ti --rm vitess/base vtctlclient -server 12.34.56.78:15000
# create a global keyspace record
$ sudo docker run -ti --rm vitess/base vtctlclient -server <minion-addr>:15000 CreateKeyspace my_keyspace
$ sudo docker run -ti --rm vitess/base vtctlclient -server 12.34.56.78:15000 CreateKeyspace my_keyspace
```

If you don't want to open the port on the firewall, you can SSH into one of your
minions and perform the above commands against the minion's local Kubernetes proxy.
For example:
minions and perform the above commands against the internal IP for the vtctld
service. For example:

```
# get service IP
$ kubecfg.sh list services
Name Labels Selector IP Port
---------- ---------- ---------- ---------- ----------
vtctld name=vtctld name=vtctld 10.0.12.151 15000
# log in to a minion
$ gcloud compute ssh kubernetes-minion-1
# run a command
kubernetes-minion-1:~$ sudo docker run -ti --rm vitess/base vtctlclient -server $HOSTNAME:15000 CreateKeyspace your_keyspace
kubernetes-minion-1:~$ sudo docker run -ti --rm vitess/base vtctlclient -server 10.0.12.151:15000 CreateKeyspace your_keyspace
```

## Creating a keyspace and shard

This creates the initial paths in the topology server.

```
$ alias vtctl="sudo docker run -ti --rm vitess/base vtctlclient -server <minion-addr>:15000"
$ alias vtctl="sudo docker run -ti --rm vitess/base vtctlclient -server 12.34.56.78:15000"
$ vtctl CreateKeyspace test_keyspace
$ vtctl CreateShard test_keyspace/0
```
@@ -128,7 +145,7 @@ vitess/examples/kubernetes$ ./vttablet-up.sh
Wait for the pods to enter Running state (*kubecfg.sh list pods*).
Again, this may take a while if a pod was scheduled on a minion that needs to
download the Vitess Docker image. Eventually you should see the tablets show up
in the *DB topology* summary page of vtctld (*http://&lt;minion-addr&gt;:15000/dbtopo*).
in the *DB topology* summary page of vtctld (*http://12.34.56.78:15000/dbtopo*).

### Troubleshooting

@@ -165,22 +182,12 @@ root@vttablet-101:vt_0000000101# cat error.log
### Viewing vttablet status

Each vttablet serves a set of HTML status pages on its primary port.
The vtctld interface provides links on each tablet entry, but these currently
don't work when running within Kubernetes. Because there is no DNS server in
Kubernetes yet, we can't use the hostname of the pod to find the tablet, since
that hostname is not resolvable outside the pod itself. Also, we use internal
IP addresses to communicate within the cluster because in a typical cloud
environment, network fees are charged differently when instances communicate
on external IPs.

As a result, getting access to a tablet's status page from your workstation
outside the cluster is a bit tricky. Currently, this example assigns a unique
port to every tablet and then publishes that port to the Docker host machine.
For example, the tablet with UID 101 is assigned port 15101. You then have to
look up the external IP of the minion that is running vttablet-101
(via *kubecfg.sh list pods*), and visit
*http://&lt;minion-addr&gt;:15101/debug/status*. You'll of course need access
to these ports from your workstation to be allowed by any firewalls.
The vtctld interface provides links on each tablet entry, but these links are
to internal per-pod IPs that can only be accessed from within Kubernetes.
As a workaround, you can proxy over an SSH connection to a Kubernetes minion,
or launch a proxy as a Kubernetes service.

The status URL for each tablet is *http://tablet-ip:15002/debug/status*.
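
As mentioned above, one option is an SSH proxy through a minion. A minimal sketch, assuming Google Compute Engine and a tablet internal IP looked up from the vtctld *DB topology* page (10.244.1.9 below is a placeholder):

```
# Tunnel local port 15002 through a minion to the tablet's internal IP.
$ gcloud compute ssh kubernetes-minion-1 --ssh-flag="-L 15002:10.244.1.9:15002"
# While the tunnel is open, visit http://localhost:15002/debug/status locally.
```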

## Starting MySQL replication

@@ -190,7 +197,7 @@ To do that, we do a forced reparent to the existing master.

```
$ vtctl RebuildShardGraph test_keyspace/0
$ vtctl ReparentShard -force test_keyspace/0 test_cell-0000000100
$ vtctl ReparentShard -force test_keyspace/0 test-0000000100
$ vtctl RebuildKeyspaceGraph test_keyspace
```

@@ -207,8 +214,8 @@ vitess/examples/kubernetes$ vtctl ApplySchemaKeyspace -simple -sql "$(cat create

Clients send queries to Vitess through vtgate, which routes them to the
correct vttablet(s) behind the scenes. In Kubernetes, we define a vtgate
service (currently using Services v1 on $SERVICE_HOST:15001) that load
balances connections to a pool of vtgate pods curated by a
service with an external IP that load balances connections to a pool of
vtgate pods curated by a
[replication controller](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md).

```
@@ -218,12 +225,19 @@ vitess/examples/kubernetes$ ./vtgate-up.sh
## Creating a client app

The file *client.py* contains a simple example app that connects to vtgate
and executes some queries. Assuming you have opened firewall access from
your workstation to port 15001, you can run it locally and point it at any
minion:
and executes some queries.

```
$ sudo docker run -ti --rm vitess/base bash -c '$VTTOP/examples/kubernetes/client.py --server=<minion-addr>:15001'
# open vtgate port
$ gcloud compute firewall-rules create vtgate --allow tcp:15001
# get external IP for vtgate service
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
vtgate us-central1 123.123.123.123 TCP us-central1/targetPools/vtgate
# run client.py
$ sudo docker run -ti --rm vitess/base bash -c '$VTTOP/examples/kubernetes/client.py --server=123.123.123.123:15001'
Inserting into master...
Reading from master...
(1L, 'V is for speed')
7 changes: 3 additions & 4 deletions examples/kubernetes/client.py
@@ -11,7 +11,6 @@

# Constants and params
UNSHARDED = [keyrange.KeyRange(keyrange_constants.NON_PARTIAL_KEYRANGE)]
cursorclass = vtgate_cursor.VTGateCursor

# Parse args
parser = argparse.ArgumentParser()
@@ -26,13 +25,13 @@

# Read topology
# This is a temporary work-around until the VTGate V2 client is topology-free.
topoconn = zkocc.ZkOccConnection(args.server, 'test_cell', args.timeout)
topoconn = zkocc.ZkOccConnection(args.server, 'test', args.timeout)
topology.read_topology(topoconn)
topoconn.close()

# Insert something.
print('Inserting into master...')
cursor = conn.cursor(cursorclass, conn, 'test_keyspace', 'master',
cursor = conn.cursor('test_keyspace', 'master',
keyranges=UNSHARDED, writable=True)
cursor.begin()
cursor.execute(
@@ -52,7 +51,7 @@
# Read from a replica.
# Note that this may be behind master due to replication lag.
print('Reading from replica...')
cursor = conn.cursor(cursorclass, conn, 'test_keyspace', 'replica',
cursor = conn.cursor('test_keyspace', 'replica',
keyranges=UNSHARDED)
cursor.execute('SELECT * FROM test_table', {})
for row in cursor.fetchall():
29 changes: 29 additions & 0 deletions examples/kubernetes/etcd-controller-template.yaml
@@ -0,0 +1,29 @@
apiVersion: v1beta1
kind: ReplicationController
id: etcd-{{cell}}
desiredState:
  replicas: 3
  replicaSelector:
    name: etcd
    cell: {{cell}}
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: etcd-{{cell}}
        containers:
          - name: etcd
            image: vitess/etcd:v0.4.6
            command:
              - bash
              - "-c"
              - >-
                ipaddr=$(hostname -i)
                etcd -name $HOSTNAME -peer-addr $ipaddr:7001 -addr $ipaddr:4001 -discovery {{discovery}}
    labels:
      name: etcd
      cell: {{cell}}
labels:
  name: etcd
  cell: {{cell}}
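
The {{cell}} and {{discovery}} placeholders are filled in by etcd-up.sh; the following is only a hypothetical sketch of that kind of substitution (the discovery-token fetch and sed invocation are assumptions, not the actual script):

```
# Hypothetical sketch: get a discovery token, fill in the template, and create
# the replication controller for the 'test' cell.
cell=test
discovery=$(curl -sL https://discovery.etcd.io/new)
sed -e "s,{{cell}},$cell,g" -e "s,{{discovery}},$discovery,g" etcd-controller-template.yaml > etcd-controller-$cell.yaml
kubecfg.sh -c etcd-controller-$cell.yaml create replicationControllers
```
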
17 changes: 17 additions & 0 deletions examples/kubernetes/etcd-down.sh
@@ -0,0 +1,17 @@
#!/bin/bash

# This is an example script that tears down the etcd servers started by
# etcd-up.sh. It assumes that kubernetes/cluster/kubecfg.sh is in the path.

# Delete replication controllers
for cell in 'global' 'test'; do
  echo "Deleting pods created by etcd replicationController for $cell cell..."
  kubecfg.sh stop etcd-$cell

  echo "Deleting etcd replicationController for $cell cell..."
  kubecfg.sh delete replicationControllers/etcd-$cell

  echo "Deleting etcd service for $cell cell..."
  kubecfg.sh delete services/etcd-$cell
done

11 changes: 11 additions & 0 deletions examples/kubernetes/etcd-service-template.yaml
@@ -0,0 +1,11 @@
apiVersion: v1beta1
kind: Service
id: etcd-{{cell}}
port: 4001
containerPort: 4001
selector:
  name: etcd
  cell: {{cell}}
labels:
  name: etcd
  cell: {{cell}}
