
Running CockroachDB across multiple Kubernetes clusters

The script and configuration files in this directory enable deploying CockroachDB across multiple Kubernetes clusters that are spread across different geographic regions. It deploys a CockroachDB StatefulSet into each separate cluster, and links them together using DNS.

To use the configuration provided here:

1. Check out this repository (or otherwise download a copy of this directory).
2. Fill in the constants at the top of setup.py with the relevant information about your Kubernetes clusters.
3. Optionally, modify cockroachdb-statefulset-secure.yaml as explained in our Kubernetes performance tuning guide.
4. Run setup.py.
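As a rough sketch, the steps above look like the following (the clone URL is the main CockroachDB repository; the directory path is illustrative of where this directory lives in the tree):

```
$ git clone https://github.com/cockroachdb/cockroach
$ cd cockroach/cloud/kubernetes/multiregion
$ # edit the constants at the top of setup.py, then:
$ python setup.py
```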

You should see a lot of output as the script runs, hopefully ending with job "cluster-init-secure" created. This indicates that everything was created successfully, and you should soon see the CockroachDB cluster initialized, with 3 pods in the "READY" state in each Kubernetes cluster. At this point you can manage the StatefulSet in each cluster independently if you so desire, scaling the number of replicas, changing their resource requests, or making other modifications as you please.
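To watch progress, you can list the CockroachDB pods in each cluster. The context name below is a placeholder, and the app=cockroachdb label is the one used in cockroachdb-statefulset-secure.yaml:

```
$ kubectl get pods --context YOUR-CLUSTERS-CONTEXT-HERE --selector app=cockroachdb
```

Repeat with each cluster's context; once each cluster shows 3 pods in the READY state, the deployment is healthy.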

If anything goes wrong along the way, please let us know via any of the normal troubleshooting channels. While we believe this creates a highly available, maintainable multi-region deployment, it is still pushing the boundaries of how Kubernetes is typically used, so feedback and issue reports are much appreciated.

Limitations

Pod-to-pod connectivity

The deployment outlined in this directory relies on pod IP addresses being routable across Kubernetes clusters and regions. This yields optimal performance, particularly compared to alternative solutions that route all inter-cluster packets through load balancers, but it means the deployment won't work in environments where pod IPs are not routable between clusters.

This requirement is satisfied by clusters deployed in cloud environments such as Google Kubernetes Engine, and can also be satisfied by on-prem environments, depending on the Kubernetes networking setup used. If you want to test whether your clusters will work, you can run this basic network test:

$ kubectl run network-test --image=alpine --restart=Never -- sleep 999999
pod "network-test" created
$ kubectl describe pod network-test | grep IP
IP:           THAT-PODS-IP-ADDRESS
$ kubectl config use-context YOUR-OTHER-CLUSTERS-CONTEXT-HERE
$ kubectl run -it network-test --image=alpine --restart=Never -- ping THAT-PODS-IP-ADDRESS
If you don't see a command prompt, try pressing enter.
64 bytes from 10.12.14.10: seq=1 ttl=62 time=0.570 ms
64 bytes from 10.12.14.10: seq=2 ttl=62 time=0.449 ms
64 bytes from 10.12.14.10: seq=3 ttl=62 time=0.635 ms
64 bytes from 10.12.14.10: seq=4 ttl=62 time=0.722 ms
64 bytes from 10.12.14.10: seq=5 ttl=62 time=0.504 ms
...

If the pods can directly connect, you should see successful ping output like the above. If they can't, you won't see any successful ping responses. Make sure to delete the network-test pod in each cluster when you're done!
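For example, to remove the test pod from both clusters (the context name is a placeholder, as in the test above):

```
$ kubectl delete pod network-test
$ kubectl config use-context YOUR-OTHER-CLUSTERS-CONTEXT-HERE
$ kubectl delete pod network-test
```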

Exposing DNS servers to the Internet

As currently configured, the DNS servers from the Kubernetes clusters are linked together by exposing each of them via a load-balanced IP address that is visible to the public Internet. This is because Google Cloud Platform's Internal Load Balancers do not currently support clients in one region using a load balancer in another region.

None of the services in your Kubernetes clusters are made accessible this way, but their DNS names could leak to a motivated attacker. If this is unacceptable, please let us know and we can demonstrate other options. Your voice could also help convince Google to allow clients in one region to use an Internal Load Balancer in another, which would eliminate the problem.

Cleaning up

To remove all the resources created in your clusters by setup.py, copy the parameters you provided at the top of setup.py to the top of teardown.py and run teardown.py.
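For example, from this directory (assuming the same Python environment used to run setup.py):

```
$ # copy the constants you filled in at the top of setup.py into teardown.py, then:
$ python teardown.py
```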

More information

For more information on running CockroachDB in Kubernetes, please see the README in the parent directory.