
Commit

Move multi-az doc into sphinx
kragniz committed Mar 6, 2018
1 parent 4c0b122 commit 0321d33
Showing 2 changed files with 88 additions and 87 deletions.
88 changes: 88 additions & 0 deletions docs/cassandra.rst
@@ -8,3 +8,91 @@ Example ``CassandraCluster`` resource:

.. include:: quick-start/cassandra-cluster.yaml
:literal:

Cassandra Across Multiple Availability Zones
--------------------------------------------

With rack awareness
~~~~~~~~~~~~~~~~~~~

Navigator supports running Cassandra with
`rack and datacenter-aware replication <https://docs.datastax.com/en/cassandra/latest/cassandra/architecture/archDataDistributeReplication.html>`_.
To deploy this, run a ``nodePool`` in each availability zone and mark each one as a separate Cassandra rack.

The
`nodeSelector <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector>`_
field of a ``nodePool`` schedules that nodePool onto the set of nodes matching the given labels.
Use it with a node label such as
`failure-domain.beta.kubernetes.io/zone <https://kubernetes.io/docs/reference/labels-annotations-taints/#failure-domainbetakubernetesiozone>`_.
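
If you are not sure which zone values are present in your cluster, the node labels can be listed directly.
This is a quick check rather than part of the Navigator configuration, and it assumes ``kubectl`` access to the cluster:

.. code-block:: console

   # Show each node together with the value of its availability-zone label.
   $ kubectl get nodes -L failure-domain.beta.kubernetes.io/zone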

The ``datacenter`` and ``rack`` fields mark all Cassandra nodes in a ``nodePool`` as being located in that datacenter and rack.
This information can then be used with the
`NetworkTopologyStrategy <http://cassandra.apache.org/doc/latest/architecture/dynamo.html#network-topology-strategy>`_
keyspace replica placement strategy.
If these are not specified, Navigator will select an appropriate name for each: ``datacenter`` defaults to a static value, and ``rack`` defaults to the nodePool's name.
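
For instance, with a datacenter named ``europe-west1`` (as in the example below), a keyspace can be configured to keep three replicas in that datacenter, spread across its racks by ``NetworkTopologyStrategy``.
This is only a sketch of the CQL involved, not something Navigator runs for you; the keyspace name ``example_ks`` and the ``<cql-service-host>`` placeholder are illustrative:

.. code-block:: console

   # Keep three replicas of example_ks in the europe-west1 datacenter;
   # NetworkTopologyStrategy places them with rack awareness.
   $ cqlsh <cql-service-host> \
       -e "CREATE KEYSPACE example_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'europe-west1': 3};"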

As an example, here is the ``nodePools`` section of a ``CassandraCluster`` spec for deploying into GKE in ``europe-west1`` with rack awareness enabled:

.. code-block:: yaml

   nodePools:
   - name: "np-europe-west1-b"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-b"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"
   - name: "np-europe-west1-c"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-c"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"
   - name: "np-europe-west1-d"
     replicas: 3
     datacenter: "europe-west1"
     rack: "europe-west1-d"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
     persistence:
       enabled: true
       size: "5Gi"
       storageClass: "default"

Without rack awareness
~~~~~~~~~~~~~~~~~~~~~~

Since the default rack name is equal to the ``nodePool`` name,
disable rack awareness by setting ``rack`` to the same static value in every ``nodePool``.

A simplified example:

.. code-block:: yaml

   nodePools:
   - name: "np-europe-west1-b"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-b"
   - name: "np-europe-west1-c"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-c"
   - name: "np-europe-west1-d"
     replicas: 3
     datacenter: "europe-west1"
     rack: "default-rack"
     nodeSelector:
       failure-domain.beta.kubernetes.io/zone: "europe-west1-d"
87 changes: 0 additions & 87 deletions docs/cassandra/multi-az.md

This file was deleted.
