---
title: Deploy CockroachDB on Google Cloud Platform GCE
summary: Learn how to deploy CockroachDB on Google Cloud Platform's Compute Engine.
toc: false
toc_not_nested: true
---
This page shows you how to manually deploy a multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE).
If you plan to use CockroachDB in production, we recommend using a secure cluster (documented on this page). However, if you are not concerned with protecting your data with SSL encryption, you can use the Insecure instructions below.
<style> .filters .scope-button { width: 20%; height: 65px; margin: 30px 15px 10px 0px; } .filters a:hover { border-bottom: none; } </style>

Locally, you must have CockroachDB installed, which you'll use to generate and manage your deployment's certificates.
In GCE, you must have SSH access to each machine with root or sudo privileges. This is necessary for distributing binaries and starting CockroachDB.
Decide how you want to access your Admin UI:

- Only from specific IP addresses, which requires you to set firewall rules to allow communication on port `8080` (documented on this page).
- Using an SSH tunnel, which requires you to use `--http-host=localhost` when starting your nodes.
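If you choose the SSH tunnel approach, a sketch of the tunnel command looks like the following (the local port and the placeholder host are illustrative; this assumes nodes were started with `--http-host=localhost`):

```shell
# Forward local port 8080 to the Admin UI running on node1.
$ ssh -L 8080:localhost:8080 <username>@<node1 external IP address>

# While the tunnel is open, browse to http://localhost:8080 on your local machine.
```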
For guidance on cluster topology, clock synchronization, and file descriptor limits, see Recommended Production Settings.
{{site.data.alerts.callout_success}}Terraform users can deploy CockroachDB using the configuration files and instructions in our GitHub repo's `gce` directory.{{site.data.alerts.end}}
CockroachDB requires TCP communication on two ports:
- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) and connecting with applications
- **8080** (`tcp:8080`) for exposing your Admin UI
Inter-node communication works by default using your GCE instances' internal IP addresses, which allow communication with other instances on CockroachDB's default port `26257`. However, to accept data from applications external to GCE and expose your Admin UI, you need to create firewall rules for your project.
When creating firewall rules, we recommend using Google Cloud Platform's tag feature, which lets you specify that you want to apply the rule only to instances that include the same tag.
Field | Recommended Value |
---|---|
Name | cockroachadmin |
Source filter | IP ranges |
Source IP ranges | Your local network's IP ranges |
Allowed protocols... | tcp:8080 |
Target tags | cockroachdb |
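If you prefer the command line, a rule equivalent to the table above can be sketched with `gcloud` (the source range is a placeholder for your own network):

```shell
# Allow your local network to reach the Admin UI (port 8080) on tagged instances.
$ gcloud compute firewall-rules create cockroachadmin \
    --allow tcp:8080 \
    --source-ranges <your local network's IP ranges> \
    --target-tags cockroachdb
```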
{{site.data.alerts.callout_success}}If your application is also hosted on GCE, you won't need to create a firewall rule for your application to communicate with your instances hosting CockroachDB.{{site.data.alerts.end}}
Field | Recommended Value |
---|---|
Name | cockroachapp |
Source filter | IP ranges |
Source IP ranges | Your application's IP ranges |
Allowed protocols... | tcp:26257 |
Target tags | cockroachdb |
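As with the Admin UI rule, the application rule above can be sketched with `gcloud` (the source range is a placeholder for your application's network):

```shell
# Allow your application's network to reach CockroachDB (port 26257) on tagged instances.
$ gcloud compute firewall-rules create cockroachapp \
    --allow tcp:26257 \
    --source-ranges <your application's IP ranges> \
    --target-tags cockroachdb
```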
To connect your application to CockroachDB, use a PostgreSQL wire protocol driver.
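As a sketch, most PostgreSQL drivers accept a libpq-style connection URL like the following (the database name `bank` and the certificate paths are illustrative, not part of this deployment):

```shell
# Connect with psql over the PostgreSQL wire protocol, verifying the server
# certificate against the CA and presenting the root client certificate.
$ psql "postgresql://root@<node external IP address>:26257/bank?sslmode=verify-full&sslrootcert=certs/ca.cert&sslcert=certs/root.cert&sslkey=certs/root.key"
```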
Create an instance for each node you plan to have in your cluster. We recommend:
- Running at least 3 nodes to ensure survivability.
- Selecting the same continent for all of your instances for best performance.
If you used a tag for your firewall rules, when you create the instance, select Management, disk, networking, SSH keys. Then on the Management tab, in the Tags field, enter cockroachdb.
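Instance creation can also be scripted. A minimal sketch with `gcloud` (the instance name, machine type, and zone are placeholders to adjust for your deployment) that applies the `cockroachdb` tag:

```shell
# Create one tagged instance per node so the firewall rules apply to it.
$ gcloud compute instances create cockroach-node1 \
    --machine-type n1-standard-4 \
    --zone us-east1-b \
    --tags cockroachdb
```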
Locally, you'll need to create the following certificates and keys:
- A certificate authority (CA) key pair (`ca.cert` and `ca.key`)
- A client key pair for the `root` user
- A node key pair for each node, issued to its IP addresses and any common names the machine uses
{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}}
- Create a `certs` directory:

  ```shell
  $ mkdir certs
  ```
- Create the CA key pair:

  ```shell
  $ cockroach cert create-ca \
  --ca-cert=certs/ca.cert \
  --ca-key=certs/ca.key
  ```
- Create a client key pair for the `root` user:

  ```shell
  $ cockroach cert create-client \
  root \
  --ca-cert=certs/ca.cert \
  --ca-key=certs/ca.key \
  --cert=certs/root.cert \
  --key=certs/root.key
  ```
- For each node, create a node key pair issued for all common names you might use to refer to the node, including:

  - `<node internal IP address>`, which is the instance's Internal IP.
  - `<node external IP address>`, which is the instance's External IP address.
  - `<node hostname>`, which is the instance's Name.
  - `<other common names for node>`, which include any domain names you point to the instance.
  - `localhost` and `127.0.0.1`

  ```shell
  $ cockroach cert create-node \
  <node internal IP address> \
  <node external IP address> \
  <node hostname> \
  <other common names for node> \
  localhost \
  127.0.0.1 \
  --ca-cert=certs/ca.cert \
  --ca-key=certs/ca.key \
  --cert=certs/<node name>.cert \
  --key=certs/<node name>.key
  ```
- Upload the certificates to each node:

  ```shell
  # Create the certs directory:
  $ ssh <username>@<node external IP address> "mkdir certs"

  # Upload the CA certificate, client (root) certificate and key, and node certificate and key:
  $ scp certs/ca.cert \
  certs/root.cert \
  certs/root.key \
  certs/<node name>.cert \
  certs/<node name>.key \
  <username>@<node external IP address>:~/certs
  ```
- SSH to your instance:

  ```shell
  $ ssh <username>@<node1 external IP address>
  ```
- Install the latest CockroachDB binary:

  ```shell
  # Get the latest CockroachDB tarball.
  $ wget https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz

  # Extract the binary.
  $ tar -xf cockroach-latest.linux-amd64.tgz \
  --strip=1 cockroach-latest.linux-amd64/cockroach

  # Move the binary.
  $ sudo mv cockroach /usr/local/bin
  ```
- Start a new CockroachDB cluster with a single node, specifying the location of certificates and the address at which other nodes can reach it:

  ```shell
  $ cockroach start --background \
  --ca-cert=certs/ca.cert \
  --cert=certs/<node1 name>.cert \
  --key=certs/<node1 name>.key \
  --advertise-host=<node1 internal IP address>
  ```
At this point, your cluster is live and operational but contains only a single node. Next, scale your cluster by setting up additional nodes that will join the cluster.
- SSH to your instance:

  ```shell
  $ ssh <username>@<additional node external IP address>
  ```
- Install the latest CockroachDB binary:

  ```shell
  # Get the latest CockroachDB tarball.
  $ wget https://binaries.cockroachdb.com/cockroach-latest.linux-amd64.tgz

  # Extract the binary.
  $ tar -xf cockroach-latest.linux-amd64.tgz \
  --strip=1 cockroach-latest.linux-amd64/cockroach

  # Move the binary.
  $ sudo mv cockroach /usr/local/bin
  ```
- Start a new node that joins the cluster using the first node's internal IP address:

  ```shell
  $ cockroach start --background \
  --ca-cert=certs/ca.cert \
  --cert=certs/<node name>.cert \
  --key=certs/<node name>.key \
  --advertise-host=<node internal IP address> \
  --join=<node1 internal IP address>:26257
  ```
Repeat these steps for each instance you want to use as a node.
To test your distributed, multi-node cluster, access the built-in SQL client and create a new database. That database will then be accessible from all of the nodes in your cluster.
- SSH to your first node:

  ```shell
  $ ssh <username>@<node1 external IP address>
  ```
- Launch the built-in SQL client and create a database:

  ```shell
  $ cockroach sql \
  --ca-cert=certs/ca.cert \
  --cert=certs/root.cert \
  --key=certs/root.key
  ```

  {{site.data.alerts.callout_info}}When issuing `cockroach` commands on secure clusters, you must include flags for the `ca-cert`, as well as the client's `cert` and `key`.{{site.data.alerts.end}}

  ```sql
  > CREATE DATABASE securenodetest;
  ```
- In another terminal window, SSH to another node:

  ```shell
  $ ssh <username>@<node3 external IP address>
  ```
- Launch the built-in SQL client:

  ```shell
  $ cockroach sql \
  --ca-cert=certs/ca.cert \
  --cert=certs/root.cert \
  --key=certs/root.key
  ```
- View the cluster's databases, which will include `securenodetest`:

  ```sql
  > SHOW DATABASES;
  ```

  ```
  +----------------+
  |    DATABASE    |
  +----------------+
  | securenodetest |
  +----------------+
  ```
View your cluster's Admin UI by going to `https://<any node's external IP address>:8080`.
{{site.data.alerts.callout_info}}Note that your browser will consider the CockroachDB-created certificate invalid; you’ll need to click through a warning message to get to the UI.{{site.data.alerts.end}}
On this page, go to the following tabs on the left:
- Nodes to ensure all of your nodes successfully joined the cluster.
- Databases to ensure `securenodetest` is listed.
{% include prometheus-callout.html %}
Now that your deployment is working, you can: