Merge pull request #1610 from cockroachdb/cloud-migration

Cloud migration tutorial

Jesse Seldess committed Jun 27, 2017
2 parents 7d0fc54 + 5ae031f commit 0520d55
Showing 26 changed files with 251 additions and 2 deletions.
6 changes: 6 additions & 0 deletions _includes/sidebar-data.json
@@ -133,6 +133,12 @@
"urls": [
"/demo-automatic-rebalancing.html"
]
},
{
"title": "Cloud Migration",
"urls": [
"/demo-cloud-migration.html"
]
}
]
}
1 change: 1 addition & 0 deletions build-a-c++-app-with-cockroachdb.md
@@ -70,3 +70,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-clojure-app-with-cockroachdb.md
@@ -105,3 +105,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-go-app-with-cockroachdb-gorm.md
@@ -93,3 +93,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-go-app-with-cockroachdb.md
@@ -111,3 +111,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-java-app-with-cockroachdb-hibernate.md
@@ -103,3 +103,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-java-app-with-cockroachdb.md
@@ -75,3 +75,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-nodejs-app-with-cockroachdb-sequelize.md
@@ -92,3 +92,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-nodejs-app-with-cockroachdb.md
@@ -114,3 +114,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-php-app-with-cockroachdb.md
@@ -81,3 +81,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-python-app-with-cockroachdb-sqlalchemy.md
@@ -95,3 +95,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-python-app-with-cockroachdb.md
@@ -116,3 +116,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-ruby-app-with-cockroachdb-activerecord.md
@@ -94,3 +94,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-ruby-app-with-cockroachdb.md
@@ -99,3 +99,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions build-a-rust-app-with-cockroachdb.md
@@ -81,3 +81,4 @@ You might also be interested in using a local cluster to explore the following c
- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions demo-automatic-rebalancing.md
@@ -175,3 +175,4 @@ Use a local cluster to explore these other core CockroachDB features:

- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Cloud Migration](demo-cloud-migration.html)
226 changes: 226 additions & 0 deletions demo-cloud-migration.md
@@ -0,0 +1,226 @@
---
title: Cloud Migration
summary: Use a local cluster to simulate migrating from one cloud platform to another.
toc: false
---

CockroachDB's flexible [replication controls](configure-replication-zones.html) make it trivially easy to run a single CockroachDB cluster across cloud platforms or to migrate a cluster from one cloud to another without any service interruption. This page walks you through a local simulation of the process.

<div id="toc"></div>

## Before You Begin

In this tutorial, you'll use CockroachDB, the HAProxy load balancer, and CockroachDB's version of the YCSB load generator, which requires Go. Before you begin, make sure these applications are installed:

- Install the latest version of [CockroachDB](install-cockroachdb.html).
- Install [HAProxy](http://www.haproxy.org/). If you're on a Mac and using Homebrew, use `brew install haproxy`.
- Install [Go](https://golang.org/dl/). If you're on a Mac and using Homebrew, use `brew install go`.
- Install the [CockroachDB version of YCSB](https://github.com/cockroachdb/loadgen/tree/master/ycsb): `go get github.com/cockroachdb/loadgen/ycsb`

Also, to keep track of the data files and logs for your cluster, you may want to create a new directory (e.g., `mkdir cloud-migration`) and start all your nodes in that directory.
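For example, the working directory can be set up like this (the directory name is just the suggestion above):

```shell
# Create a scratch directory for the tutorial and work from it, so
# each node's data store and logs land in one predictable place.
mkdir -p cloud-migration
cd cloud-migration
```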

## Step 1. Start a 3-node cluster on "cloud 1"

If you've already [started a local cluster](start-a-local-cluster.html), the commands for starting nodes should be familiar to you. The new flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which accepts key-value pairs that describe the locality of a node. In this case, you're using the flag to specify that the first 3 nodes are running on cloud 1.

In a new terminal, start node 1 on cloud 1:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=1 \
--store=cloud1node1 \
--host=localhost \
--cache=100MB
~~~

In a new terminal, start node 2 on cloud 1:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=1 \
--store=cloud1node2 \
--host=localhost \
--port=26258 \
--http-port=8081 \
--join=localhost:26257 \
--cache=100MB
~~~

In a new terminal, start node 3 on cloud 1:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=1 \
--store=cloud1node3 \
--host=localhost \
--port=26259 \
--http-port=8082 \
--join=localhost:26257 \
--cache=100MB
~~~

## Step 2. Set up HAProxy load balancing

You're now running 3 nodes in a simulated cloud. Each of these nodes is an equally suitable SQL gateway to your cluster, but to ensure an even balancing of client requests across these nodes, you can use a TCP load balancer. Let's use the open-source [HAProxy](http://www.haproxy.org/) load balancer that you installed earlier.

In a new terminal, run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command, specifying the port of any node:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach gen haproxy --insecure --host=localhost --port=26257
~~~

This command generates an `haproxy.cfg` file automatically configured to work with the 3 nodes of your running cluster. In the file, change `bind :26257` to `bind :26000`. This changes the port on which HAProxy accepts requests to a port that is not already in use by a node and that won't be used by the nodes you'll add later.

~~~
global
  maxconn 4096

defaults
    mode                tcp
    timeout connect     10s
    timeout client      1m
    timeout server      1m

listen psql
    bind :26000
    mode tcp
    balance roundrobin
    server cockroach1 localhost:26257
    server cockroach2 localhost:26258
    server cockroach3 localhost:26259
~~~
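If you prefer to script the edit, a `sed` one-liner does the same thing (a sketch; `-i.bak` keeps a backup and works with both GNU and BSD `sed`):

```shell
# When trying this snippet on its own, create a stand-in line so the
# edit can be demonstrated without a generated haproxy.cfg.
[ -f haproxy.cfg ] || echo 'bind :26257' > haproxy.cfg

# Rewrite the bind port from the default 26257 to 26000, keeping the
# original file as haproxy.cfg.bak.
sed -i.bak 's/bind :26257/bind :26000/' haproxy.cfg
```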

Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file:

{% include copy-clipboard.html %}
~~~ shell
$ haproxy -f haproxy.cfg
~~~

## Step 3. Start a load generator

Now that you have a load balancer running in front of your cluster, let's use the YCSB load generator that you installed earlier to simulate multiple client connections, each performing mixed read/write workloads.

In a new terminal, start `ycsb`, pointing it at HAProxy's port:

{% include copy-clipboard.html %}
~~~ shell
$ $HOME/go/bin/ycsb -duration 20m -tolerate-errors -concurrency 10 -rate-limit 100 'postgresql://root@localhost:26000?sslmode=disable'
~~~

This command initiates 10 concurrent client workloads for 20 minutes, but limits each worker to 100 operations per second (since you're running everything on a single machine).

## Step 4. Watch data balance across all 3 nodes

Now open the Admin UI at `http://localhost:8080` and hover over the **SQL Queries** graph at the top. After a minute or so, you'll see that the load generator is executing approximately 95% reads and 5% writes across all nodes:

<img src="images/admin_ui_sql_queries.png" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />

Scroll down a bit and hover over the **Replicas per Node** graph. Because CockroachDB replicates each piece of data 3 times by default, the replica count on each of your 3 nodes should be identical:

<img src="images/admin_ui_replicas_migration.png" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />

## Step 5. Add 3 nodes on "cloud 2"

At this point, you're running three nodes on cloud 1. But what if you'd like to start experimenting with resources provided by another cloud vendor? Let's try that by adding three more nodes to a new cloud platform. Again, the flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which you're using to specify that these next 3 nodes are running on cloud 2.

In a new terminal, start node 4 on cloud 2:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=2 \
--store=cloud2node4 \
--host=localhost \
--port=26260 \
--http-port=8083 \
--join=localhost:26257 \
--cache=100MB
~~~

In a new terminal, start node 5 on cloud 2:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=2 \
--store=cloud2node5 \
--host=localhost \
--port=26261 \
--http-port=8084 \
--join=localhost:26257 \
--cache=100MB
~~~

In a new terminal, start node 6 on cloud 2:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start --insecure \
--locality=cloud=2 \
--store=cloud2node6 \
--host=localhost \
--port=25262 \
--http-port=8085 \
--join=localhost:26257 \
--cache=100MB
~~~

## Step 6. Watch data balance across all 6 nodes

Back in the Admin UI, hover over the **Replicas per Node** graph again. Because you used [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) to specify that nodes are running on 2 clouds, you'll see an approximately even number of replicas on each node, indicating that CockroachDB has automatically rebalanced replicas across both simulated clouds:

<img src="images/admin_ui_replicas_migration2.png" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />

Note that it takes a few minutes for the Admin UI to show accurate per-node replica counts on hover. This is why the new nodes in the screenshot above show 0 replicas. However, the graph lines are accurate, and you can click **View node list** in the **Summary** area for accurate per-node replica counts as well.

## Step 7. Migrate all data to "cloud 2"

So your cluster is replicating across two simulated clouds. But let's say that after experimentation, you're happy with cloud vendor 2, and you decide that you'd like to move everything there. Can you do that without interruption to your live client traffic? Yes, and it's as simple as running a single command to add a [hard constraint](configure-replication-zones.html#replication-constraints) that all replicas must be on nodes with `--locality=cloud=2`.

In a new terminal, edit the default replication zone:

{% include copy-clipboard.html %}
~~~ shell
$ echo 'constraints: [+cloud=2]' | cockroach zone set .default --insecure --host=localhost -f -
~~~
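To double-check that the constraint took effect before watching the graphs, you can read the zone configuration back (a sketch using `cockroach zone get`):

```shell
# Print the default replication zone's YAML; the output should now
# include the constraint set above: constraints: [+cloud=2]
cockroach zone get .default --insecure --host=localhost
```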

## Step 8. Verify the data migration

Back in the Admin UI, hover over the **Replicas per Node** graph again. Very soon, you'll see the replica count double on nodes 4, 5, and 6 and drop to 0 on nodes 1, 2, and 3:

<img src="images/admin_ui_replicas_migration3.png" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />

This indicates that all data has been migrated from cloud 1 to cloud 2. In a real cloud migration scenario, at this point you would update the load balancer to point to the nodes on cloud 2 and then stop the nodes on cloud 1. But for the purpose of this local simulation, there's no need to do that.
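If you did want to rehearse that load-balancer update locally, the config could be regenerated against a cloud 2 node (a sketch; the generated server list reflects all live nodes, so the cloud 1 nodes would need to be stopped first to get a cloud-2-only config):

```shell
# Regenerate haproxy.cfg by querying node 4 on cloud 2.
cockroach gen haproxy --insecure --host=localhost --port=26260
```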

## Step 9. Stop the cluster

Once you're done with your cluster, stop YCSB by switching into its terminal and pressing **CTRL + C**. Then do the same for HAProxy and each CockroachDB node.

{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force kill the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press <strong>CTRL + C</strong> a second time.{{site.data.alerts.end}}
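Alternatively, you can shut the nodes down from a single terminal with `cockroach quit`, assuming the nodes listen on consecutive SQL ports 26257-26262 (adjust to the `--port` flags you actually used):

```shell
# Gracefully stop each node by its SQL port. As with CTRL + C, the
# last node may need to be force-killed once quorum is lost.
for port in 26257 26258 26259 26260 26261 26262; do
  cockroach quit --insecure --host=localhost --port=$port
done
```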

If you don't plan to restart the cluster, you may want to remove the nodes' data stores and the HAProxy config file:

{% include copy-clipboard.html %}
~~~ shell
$ rm -rf cloud1node1 cloud1node2 cloud1node3 cloud2node4 cloud2node5 cloud2node6 haproxy.cfg
~~~

## What's Next?

Use a local cluster to explore these other core CockroachDB features:

- [Data Replication](demo-data-replication.html)
- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)

You may also want to learn other ways to control the location and number of replicas in a cluster:

- [Even Replication Across Datacenters](configure-replication-zones.html#even-replication-across-datacenters)
- [Multiple Applications Writing to Different Databases](configure-replication-zones.html#multiple-applications-writing-to-different-databases)
- [Stricter Replication for a Specific Table](configure-replication-zones.html#stricter-replication-for-a-specific-table)
- [Tweaking the Replication of System Ranges](configure-replication-zones.html#tweaking-the-replication-of-system-ranges)
1 change: 1 addition & 0 deletions demo-data-replication.md
@@ -204,3 +204,4 @@ Use a local cluster to explore these other core CockroachDB features:

- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
1 change: 1 addition & 0 deletions demo-fault-tolerance-and-recovery.md
@@ -344,3 +344,4 @@ Use a local cluster to explore these other core CockroachDB features:

- [Data Replication](demo-data-replication.html)
- [Automatic Rebalancing](demo-automatic-rebalancing.html)
- [Cloud Migration](demo-cloud-migration.html)
Binary file added images/admin_ui_replicas_migration.png
Binary file added images/admin_ui_replicas_migration2.png
Binary file added images/admin_ui_replicas_migration3.png
Binary file added images/admin_ui_sql_connections_migration.png
Binary file added images/admin_ui_sql_queries.png
2 changes: 1 addition & 1 deletion start-a-local-cluster-in-docker.md
@@ -194,4 +194,4 @@ Use the `docker stop` and `docker rm` commands to stop and remove the containers
- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html)
- [Install the client driver](install-client-drivers.html) for your preferred language
- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html)
- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance
- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, fault tolerance, and cloud migration
2 changes: 1 addition & 1 deletion start-a-local-cluster.md
@@ -255,4 +255,4 @@ $ cockroach start --insecure \
- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html)
- [Install the client driver](install-client-drivers.html) for your preferred language
- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html)
- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance
- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, fault tolerance, and cloud migration
