This pattern deploys CockroachDB across multiple OpenShift clusters that are spread across different geographic regions and hosted on clouds such as Azure, AWS, or Google Cloud. It deploys a CockroachDB StatefulSet into each cluster and links them by deploying the Submariner add-on from the hub cluster.
If you've followed a link to this repository, but are not really sure what it contains or how to use it, head over to Multicloud GitOps for additional context and installation instructions.
- An OpenShift cluster (Go to the OpenShift console). See also sizing your cluster.
- A GitHub account (and, optionally, a token for it with repositories permissions, to read from and write to your forks)
- The helm binary; see the Helm installation documentation.
- At least two OpenShift clusters deployed in different regions or across different clouds. The clusters must be deployed with non-overlapping pod and service CIDR ranges, for example:
| Cluster  | Pod CIDR      | Service CIDR  |
|----------|---------------|---------------|
| cluster1 | 10.128.0.0/14 | 172.30.0.0/16 |
| cluster2 | 10.132.0.0/14 | 172.31.0.0/16 |
| cluster3 | 10.140.0.0/14 | 172.32.0.0/16 |
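As a quick check that each cluster was installed with non-overlapping ranges, the configured CIDRs can be read from the cluster network configuration. This is a minimal sketch; run it once per cluster, against that cluster's kubeconfig, and compare the results.

```sh
# Print the pod (cluster) CIDRs and service CIDRs configured on the current cluster.
# Run against each cluster and confirm that none of the ranges overlap.
oc get network.config.openshift.io cluster \
  -o jsonpath='{.spec.clusterNetwork[*].cidr}{"\n"}{.spec.serviceNetwork[*]}{"\n"}'
```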
ACM does not support configuring the Submariner add-on for OpenShift clusters deployed on Azure. Additional steps are required to configure Submariner on Azure clusters. Before deploying cockroachdb-pattern, ensure the following steps have been completed for each Azure cluster.
Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.
subctl cloud prepare is a command designed to update your OpenShift installer-provisioned infrastructure for Submariner deployments, handling the requirements listed above.
Run the following commands for cluster1:
az ad sp create-for-rbac --sdk-auth > my.auth
export KUBECONFIG=cluster1/auth/kubeconfig
subctl cloud prepare azure --ocp-metadata cluster1/metadata.json --auth-file my.auth
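If cluster2 (or any other managed cluster) is also deployed on Azure, repeat the same preparation against that cluster's kubeconfig and installer metadata. The paths below mirror the cluster1 example and are assumptions about your local directory layout.

```sh
export KUBECONFIG=cluster2/auth/kubeconfig
subctl cloud prepare azure --ocp-metadata cluster2/metadata.json --auth-file my.auth
```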
For more information on how to prepare an Azure OpenShift cluster for Submariner deployment, refer to the Submariner documentation.
If you do not have a running Red Hat OpenShift cluster, you can start one on a public or private cloud by using Red Hat's cloud service.
- Fork the cockroachdb-pattern repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes.
- Clone the forked copy of this repository:
git clone git@github.com:your-username/cockroachdb-pattern.git
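The remaining steps are run from the root of the clone, so change into the repository directory first (this assumes the default clone directory name):

```sh
cd cockroachdb-pattern
```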
- Create a local copy of the Helm values file that can safely include credentials.
DO NOT COMMIT THIS FILE. You do not want to push personal credentials to GitHub.
cp values-secret.yaml.template ~/values-secret.yaml
vi ~/values-secret.yaml
- Customize the deployment for your cluster.
git checkout -b my-branch
vi values-global.yaml
git add values-global.yaml
git commit values-global.yaml
git push origin my-branch
- You can deploy the pattern using the validated pattern operator. If you use the operator, skip to Validating the Environment below.
- Preview the changes:
make show
- Log in to your hub cluster using oc login or by exporting the KUBECONFIG.
oc login
or set KUBECONFIG to the path to your kubeconfig file. For example:
export KUBECONFIG=~/my-ocp-env/hub/auth/kubeconfig
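Because later steps switch between the hub and the managed clusters, it can help to confirm which cluster the current session points at before continuing; a quick sanity check:

```sh
# Show the API server and user of the cluster your current oc session is using
oc whoami --show-server
oc whoami
```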
- Apply the changes to your cluster:
make install
- Check that the operators have been installed:
OpenShift Console Web UI -> Installed Operators
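If you prefer the CLI, the installed operators can also be listed from the hub cluster. This is a sketch; the exact ClusterServiceVersion names and the filter below will vary with the pattern and operator versions.

```sh
# List operator ClusterServiceVersions in all namespaces and filter for the operators
# the pattern installs (OpenShift GitOps, Advanced Cluster Management, the patterns operator)
oc get csv -A | grep -iE 'gitops|advanced-cluster-management|patterns'
```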
- Check that all applications are synchronised. Under the project cockroachdb-pattern-hub, click the URL for the hub gitops server. The Vault application is not synced.
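The same check can be done from the command line, assuming the hub GitOps instance stores its Argo CD Application resources in the cockroachdb-pattern-hub namespace (as the console view above suggests):

```sh
# List the Argo CD applications managed by the hub GitOps server, with their sync and health status
oc get applications.argoproj.io -n cockroachdb-pattern-hub
```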
- Check that all the managed clusters have been imported. Go to Routes and search for multi within All Projects. Click the link to launch the ACM console. Under Clusters, verify that all the managed clusters have been imported. Click Cluster add-ons and verify that the Submariner add-on has been installed.
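A CLI alternative for the same verification, using the ACM and add-on APIs from the hub cluster (a sketch; the available resources depend on the ACM version the pattern installs):

```sh
# Managed clusters imported into ACM and their availability
oc get managedclusters
# Submariner add-on status in each managed cluster namespace
oc get managedclusteraddons -A | grep -i submariner
```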
- Log in to your managed cluster cluster1 using oc login or by exporting the KUBECONFIG as described in step 1.
- Select the project cockroachdb:
oc project cockroachdb
- Check that the pods are running and that the create-certs and init-cockroachdb-xxxxx pods have completed:
NAME                        READY   STATUS      RESTARTS   AGE
cockroachdb-0               1/1     Running     0          77s
cockroachdb-1               1/1     Running     0          77s
cockroachdb-2               1/1     Running     0          77s
cockroachdb-client-secure   1/1     Running     0          77s
create-certs                0/1     Completed   0          77s
init-cockroachdb-jhnns      0/1     Completed   0          77s
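Rather than repeatedly polling oc get pods, you can wait for the StatefulSet rollout to finish. This assumes the StatefulSet is named cockroachdb and runs in the cockroachdb namespace, as the listing above suggests.

```sh
# Block until all CockroachDB replicas are rolled out and Ready, or the timeout expires
oc rollout status statefulset/cockroachdb -n cockroachdb --timeout=300s
```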
- Verify that the CockroachDB data is replicated across the clusters.
a. Launch the CockroachDB command line:
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
b. Create a database and a table, and populate some data:
CREATE DATABASE IF NOT EXISTS foo;
CREATE TABLE IF NOT EXISTS foo.bar (k STRING PRIMARY KEY, v STRING);
UPSERT INTO foo.bar VALUES ('Kuber', 'netes'), ('Cockroach', 'DB');
SELECT CONCAT(k, v) FROM foo.bar;
Output:
root@cockroachdb-public:26257/defaultdb> SELECT CONCAT(k, v) FROM foo.bar;
     concat
---------------
  CockroachDB
  Kubernetes
(2 rows)
c. Log in to the second managed cluster cluster2 using oc login or by exporting the KUBECONFIG as described above.
d. Select the project cockroachdb:
oc project cockroachdb
e. Launch the CockroachDB command line:
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
f. Verify that the table and data have been replicated:
SELECT CONCAT(k, v) FROM foo.bar;
Output:
root@cockroachdb-public:26257/defaultdb> SELECT CONCAT(k, v) FROM foo.bar;
     concat
---------------
  CockroachDB
  Kubernetes
(2 rows)
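As a final sanity check, the cluster membership can be inspected from either managed cluster: if the StatefulSets in both clusters joined the same logical CockroachDB cluster, nodes from both appear in the output. This is a sketch that reuses the secure client pod and certificate paths from the steps above.

```sh
# List all CockroachDB nodes that have joined the cluster, including those running in the other OpenShift cluster
kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public
```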