This repository has been archived by the owner on Jan 15, 2021. It is now read-only.

Commit 6562148: Initial commit.

Signed-off-by: Anthony Yeh <enisoc@google.com>
enisoc committed May 22, 2018
1 parent 6014588, commit 6562148
Showing 18 changed files with 1,707 additions and 2 deletions.
README.md (129 additions, 2 deletions)
# Vitess Operator

The Vitess Operator provides automation that simplifies the administration
of [Vitess](https://vitess.io) clusters on Kubernetes.

The Operator registers a custom resource for objects of the custom kind
VitessCluster.
This custom resource lets you configure the high-level aspects of
your Vitess deployment, while the details of how to run Vitess on Kubernetes
are abstracted and automated.

## Vitess Components

A typical VitessCluster object might expand to the following tree once it's
fully deployed.
Objects in **bold** are custom resource kinds defined by this Operator.

* **VitessCluster**: The top-level specification for a Vitess cluster.
This is the only object the user creates directly.
* **VitessCell**: Each Vitess [cell](https://vitess.io/overview/concepts/#cell-data-center)
represents an independent failure domain (e.g. a Zone or Availability Zone).
* EtcdCluster ([etcd-operator](https://github.com/coreos/etcd-operator)):
Vitess needs its own etcd cluster to coordinate its built-in load-balancing
and automatic shard routing.
* Deployment ([orchestrator](https://github.com/github/orchestrator)):
An optional automated failover tool that works with Vitess.
* Deployment ([vtctld](https://vitess.io/overview/#vtctld)):
A pool of stateless Vitess admin servers, which serve a dashboard UI and
act as an endpoint for the Vitess CLI tool (vtctlclient).
* Deployment ([vtgate](https://vitess.io/overview/#vtgate)):
A pool of stateless Vitess query routers.
The client application can use any one of these vtgate Pods as the entry
point into Vitess, through a MySQL-compatible interface.
* **VitessKeyspace** (db1): Each Vitess [keyspace](https://vitess.io/overview/concepts/#keyspace)
is a logical database that may be composed of many MySQL databases (shards).
* **VitessShard** (db1/0): Each Vitess [shard](https://vitess.io/overview/concepts/#shard)
is a single-master tree of replicating MySQL instances.
* Pod(s) ([vttablet](https://vitess.io/overview/#vttablet)): Within a shard, there may be many Vitess [tablets](https://vitess.io/overview/concepts/#tablet)
(individual MySQL instances).
VitessShard acts like an app-specific replacement for StatefulSet,
creating both Pods and PersistentVolumeClaims.
* PersistentVolumeClaim(s)
* **VitessShard** (db1/1)
* Pod(s) (vttablet)
* PersistentVolumeClaim(s)
* **VitessKeyspace** (db2)
* **VitessShard** (db2/0)
* Pod(s) (vttablet)
* PersistentVolumeClaim(s)
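
To make the tree concrete, a VitessCluster spec might look roughly like the
sketch below. This is a hypothetical illustration, not the actual schema; the
`apiVersion`, `cells`, and `keyspaces` field names are assumptions. Consult the
bundled `my-vitess.yaml` for the real field names (this README only mentions
`replicas`, `tablets`, and `batch`).

```yaml
# Hypothetical sketch only -- consult my-vitess.yaml for the real schema.
apiVersion: vitess.io/v1alpha1   # assumed group/version
kind: VitessCluster
metadata:
  name: vitess
spec:
  cells:                         # assumed field: one entry per failure domain
    - name: zone1
  keyspaces:                     # assumed field: logical databases
    - name: db1
```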

## Prerequisites

* Kubernetes 1.8+ is required for its improved CRD support, especially garbage
collection.
* This config currently requires a dynamic PersistentVolume provisioner and a
default StorageClass.
* The example `my-vitess.yaml` config results in a lot of Pods.
If the Pods don't schedule due to resource limits, you can try lowering the
limits, lowering `replicas` values, or removing the `batch` config under
`tablets`.
* Install [Metacontroller](https://github.com/GoogleCloudPlatform/metacontroller).
* Install [etcd-operator](https://github.com/coreos/etcd-operator) in the
namespace where you plan to create a VitessCluster.

## Deploy the Operator

You can install the Operator into any namespace, but the references in this
example are hard-coded to `vitess`: an explicit namespace must be specified
so that the webhooks can be reached across namespaces.

Note that once the Operator is installed, you can create VitessCluster
objects in any namespace.
The example below loads `my-vitess.yaml` into the default namespace for your
kubectl context.
That's the namespace where etcd-operator also needs to be enabled,
not necessarily the `vitess` namespace.

```sh
kubectl create namespace vitess
kubectl create configmap vitess-operator-hooks -n vitess --from-file=hooks
kubectl apply -f vitess-operator.yaml
```

### Create a VitessCluster

```sh
kubectl apply -f my-vitess.yaml
```

### View the Vitess Dashboard

Wait until the cluster is ready:

```sh
kubectl get vitessclusters -o 'custom-columns=NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status'
```

You should see:

```console
NAME READY
vitess True
```

Start a kubectl proxy:

```sh
kubectl proxy --port=8001
```

Then visit:

```
http://localhost:8001/api/v1/namespaces/default/services/vitess-global-vtctld:web/proxy/app/
```
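
That URL follows the standard kubectl service-proxy scheme,
`/api/v1/namespaces/<namespace>/services/<service>:<port>/proxy/<path>`.
A minimal Python sketch of how the path is assembled; the namespace, service
name, and port name simply mirror the example above:

```python
def vtctld_proxy_url(namespace="default",
                     service="vitess-global-vtctld",
                     port_name="web",
                     api_server="http://localhost:8001"):
    """Build the kubectl service-proxy URL for the vtctld dashboard."""
    return (f"{api_server}/api/v1/namespaces/{namespace}"
            f"/services/{service}:{port_name}/proxy/app/")

print(vtctld_proxy_url())
```

If your VitessCluster lives in another namespace, substitute it for `default`
in the path.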

### Clean Up

```sh
# Delete the VitessCluster object.
kubectl delete -f my-vitess.yaml
# Uninstall the Vitess Operator.
kubectl delete -f vitess-operator.yaml
kubectl delete -n vitess configmap vitess-operator-hooks
# Delete the namespace for the Vitess Operator,
# assuming you created it just for this example.
kubectl delete namespace vitess
```
hooks/etcd.libsonnet (26 additions)
local k8s = import "k8s.libsonnet";
local metacontroller = import "metacontroller.libsonnet";

{
  local etcd = self,

  apiVersion: "etcd.database.coreos.com/v1beta2",

  // EtcdClusters
  clusters(observed, specs)::
    metacontroller.collection(observed, specs, etcd.apiVersion, "EtcdCluster", etcd.cluster),

  // Create/update an EtcdCluster child for a VitessCell parent.
  cluster(observed, spec):: {
    apiVersion: etcd.apiVersion,
    kind: "EtcdCluster",
    metadata: {
      name: observed.parent.metadata.name + "-etcd",
      labels: observed.parent.spec.template.metadata.labels,
    },
    spec: {
      version: spec.version,
      size: spec.size,
    },
  },
}
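
To see what `cluster()` manifests to, here is an equivalent Python sketch that
builds the same EtcdCluster child from a hypothetical observed parent. The cell
name, labels, and etcd version below are made-up example values, not values
from this repository:

```python
def etcd_cluster(observed, spec):
    """Python mirror of etcd.cluster(): build an EtcdCluster child
    object for a VitessCell parent."""
    parent = observed["parent"]
    return {
        "apiVersion": "etcd.database.coreos.com/v1beta2",
        "kind": "EtcdCluster",
        "metadata": {
            # The child's name is derived from the parent's name.
            "name": parent["metadata"]["name"] + "-etcd",
            # Labels are copied from the parent's pod template.
            "labels": parent["spec"]["template"]["metadata"]["labels"],
        },
        "spec": {"version": spec["version"], "size": spec["size"]},
    }

# Hypothetical observed state for a cell named "zone1".
observed = {"parent": {
    "metadata": {"name": "zone1"},
    "spec": {"template": {"metadata": {"labels": {"cell": "zone1"}}}},
}}
child = etcd_cluster(observed, {"version": "3.2.13", "size": 3})
print(child["metadata"]["name"])  # zone1-etcd
```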
hooks/k8s.libsonnet (68 additions)
// Library for working with Kubernetes objects.
{
  local k8s = self,

  // Fill in a conventional status condition object.
  condition(type, status):: {
    type: type,
    status:
      if std.type(status) == "string" then (
        status
      ) else if std.type(status) == "boolean" then (
        if status then "True" else "False"
      ) else (
        "Unknown"
      ),
  },

  // Extract the status of a given condition type.
  // Returns "" if the condition type isn't present,
  // or null if the object has no status conditions at all.
  conditionStatus(obj, type)::
    if obj != null && "status" in obj && "conditions" in obj.status then
      // Filter conditions with matching "type" field.
      local matches = [
        cond.status for cond in obj.status.conditions if cond.type == type
      ];
      // Take the first one, if any.
      if std.length(matches) > 0 then matches[0] else ""
    else
      null,

  // Returns only the objects from a given list that have the
  // "Ready" condition set to "True".
  filterReady(list)::
    std.filter(function(x) self.conditionStatus(x, "Ready") == "True", list),

  // Returns only the objects from a given list that have the
  // "Available" condition set to "True".
  filterAvailable(list)::
    std.filter(function(x) self.conditionStatus(x, "Available") == "True", list),

  // Returns whether the object matches the given label values.
  matchLabels(obj, labels)::
    local keys = std.objectFields(labels);

    "metadata" in obj && "labels" in obj.metadata &&
    [
      obj.metadata.labels[k]
      for k in keys if k in obj.metadata.labels
    ]
    ==
    [labels[k] for k in keys],

  // Get the value of a label from object metadata.
  // Returns null if the label doesn't exist.
  getLabel(obj, key)::
    if "metadata" in obj && "labels" in obj.metadata && key in obj.metadata.labels then
      obj.metadata.labels[key]
    else
      null,

  // Get the value of an annotation from object metadata.
  // Returns null if the annotation doesn't exist.
  getAnnotation(obj, key)::
    if "metadata" in obj && "annotations" in obj.metadata && key in obj.metadata.annotations then
      obj.metadata.annotations[key]
    else
      null,
}
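
The condition helpers above implement the same check the README's
`kubectl get vitessclusters` JSONPath performs: find the condition whose
`type` matches and read its `status`. A Python mirror of that logic, with
made-up pod statuses as example input:

```python
def condition_status(obj, cond_type):
    """Mirror of k8s.conditionStatus: return the status string of a
    condition, "" if the type isn't present, or None if the object
    has no status conditions at all."""
    if obj is None or "status" not in obj or "conditions" not in obj["status"]:
        return None
    matches = [c["status"] for c in obj["status"]["conditions"]
               if c["type"] == cond_type]
    return matches[0] if matches else ""

def filter_ready(objs):
    """Mirror of k8s.filterReady: keep objects whose Ready condition is True."""
    return [o for o in objs if condition_status(o, "Ready") == "True"]

# Hypothetical observed objects.
pods = [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    {"status": {}},  # no conditions reported yet
]
print(len(filter_ready(pods)))  # 1
```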
hooks/metacontroller.libsonnet (51 additions)
local k8s = import "k8s.libsonnet";

// Library for working with Metacontroller.
{
  local metacontroller = self,

  // Extend a Metacontroller request object with extra fields and functions.
  observed(request):: request + {
    children+: {
      // Get a map of children of a given kind, by child name.
      getMap(apiVersion, kind)::
        self[kind + "." + apiVersion],

      // Get a list of children of a given kind.
      getList(apiVersion, kind)::
        local map = self.getMap(apiVersion, kind);
        [map[key] for key in std.objectFields(map)],

      // Get a child object of a given kind and name.
      get(apiVersion, kind, name)::
        local map = self.getMap(apiVersion, kind);
        if name in map then map[name] else null,
    },
  },

  // Helpers for managing spec, observed, and desired states
  // for a collection of objects of a given Kind.
  collection(observed, specs, apiVersion, kind, desired):: {
    specs: if specs != null then specs else [],

    observed: observed.children.getList(apiVersion, kind),

    desired: [
      {apiVersion: apiVersion, kind: kind} + desired(observed, spec)
      for spec in self.specs
    ],

    // Hidden (::) so the collection object can be manifested to JSON.
    getObserved(name)::
      observed.children.get(apiVersion, kind, name),
  },

  // Mix-in for collection that filters observed objects.
  // This may be needed if a given parent has multiple collections of children
  // of the same Kind.
  collectionFilter(filter):: {
    observed: std.filter(filter, super.observed),
  },

  // Convert an integer string in the given base to "int" (actually double).
  // Should be precise up to 2^53.
  // This function is defined as a native extension in jsonnetd.
  parseInt(intStr, base):: std.native("parseInt")(intStr, base),
}
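
The `observed()` helpers rely on Metacontroller's convention of grouping a
hook request's children under `"Kind.apiVersion"` keys, each mapping child
name to child object. A Python mirror of that lookup scheme, using a made-up
children map as example input:

```python
def children_key(api_version, kind):
    """Observed children are grouped under 'Kind.apiVersion' keys."""
    return kind + "." + api_version

def get_list(children, api_version, kind):
    """Mirror of observed().children.getList: all children of a kind,
    as a list (sorted by name, like std.objectFields)."""
    group = children.get(children_key(api_version, kind), {})
    return [group[name] for name in sorted(group)]

def get_child(children, api_version, kind, name):
    """Mirror of observed().children.get: one child by kind and name."""
    return children.get(children_key(api_version, kind), {}).get(name)

# Hypothetical observed children from a sync request.
children = {
    "EtcdCluster.etcd.database.coreos.com/v1beta2": {
        "zone1-etcd": {"kind": "EtcdCluster"},
    },
}
print(children_key("etcd.database.coreos.com/v1beta2", "EtcdCluster"))
```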