Update based on feedback
lnhsingh committed Jan 3, 2019
1 parent 0946396 commit 0104b3b
Showing 4 changed files with 18 additions and 100 deletions.
57 changes: 8 additions & 49 deletions v2.1/training/geo-partitioning.md
@@ -189,9 +189,9 @@ For added clarity, here's a key showing how nodes map to localities:

Node IDs | Locality
---------|---------
1 - 3 | `--locality=region=us-east=datacenter=us-east1`
4 - 6 | `--locality=region=us-west=datacenter=us-west1`
7 - 9 | `--locality=region=us-west=datacenter=us-west2`
1 - 3 | `--locality=region=us-east,datacenter=us-east1`
4 - 6 | `--locality=region=us-west,datacenter=us-west1`
7 - 9 | `--locality=region=us-west,datacenter=us-west2`

In this case, for the single range containing `vehicles` data, one replica is in each datacenter, and the leaseholder is in the `us-west1` datacenter. The same is true for the single range containing `users` data, but the leaseholder is in the `us-west2` datacenter.
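
For reference, a quick way to check where these ranges and their leaseholders actually live is to inspect the ranges for a table. This is an illustrative sketch, not part of the commit: it assumes the MovR tables used in this training and the v2.1 `SHOW EXPERIMENTAL_RANGES` syntax, whose output (including the `replicas` and `lease_holder` columns) may differ in other versions.

~~~ sql
> -- Lists each range of the vehicles table with its replica node IDs and leaseholder.
> SHOW EXPERIMENTAL_RANGES FROM TABLE movr.vehicles;
~~~

The node IDs reported for `replicas` and `lease_holder` can then be matched against the locality key above.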

@@ -245,6 +245,10 @@ For the single range containing `users` data, one replica is in each datacenter,

For this service, the most effective technique for improving read and write latency is to geo-partition the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges.

{{site.data.alerts.callout_info}}
The following steps partition each table by city to demonstrate the feature. In production, it is recommended that you use a `region` constraint on the database instead.
{{site.data.alerts.end}}
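
As an illustration of that production alternative (not part of this commit), a database-wide zone configuration with a region constraint might look like the following, assuming the `movr` database and the `region=us-west` locality tier defined above:

~~~ sql
> -- Illustration only: keep all replicas for the movr database in the us-west region.
> ALTER DATABASE movr CONFIGURE ZONE USING constraints='[+region=us-west]';
~~~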

1. Partition the `users` table by city:

{% include copy-clipboard.html %}
@@ -317,34 +321,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
@@ -355,34 +344,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
@@ -393,34 +367,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
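
After constraints like these are in place, a rough spot check is to group a partitioned table's ranges by leaseholder and compare the node IDs against the datacenters constrained above (rebalancing can take a few minutes). This sketch assumes the v2.1 `SHOW EXPERIMENTAL_RANGES` output and its `lease_holder` column name, which may differ in later versions:

~~~ sql
> -- Hypothetical spot check: count rides ranges per leaseholder node after rebalancing.
> SELECT lease_holder, count(*)
    FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE movr.rides]
   GROUP BY lease_holder;
~~~
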
@@ -539,7 +498,7 @@ In the next module, you'll start with a fresh cluster, so take a moment to clean
$ pkill -9 cockroach
~~~

This simplified shutdown process is only appropriate for a lab/evaluation scenario. In a production environment, you would use `cockroach quit` to gracefully shut down each node.
This simplified shutdown process is only appropriate for a lab/evaluation scenario.

3. Remove the nodes' data directories:

2 changes: 1 addition & 1 deletion v2.1/training/orchestration-with-kubernetes.md
@@ -17,7 +17,7 @@ Feature | Description
--------|------------
[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets require Kubernetes version 1.9 or newer.
[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.<br><br>When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.

57 changes: 8 additions & 49 deletions v2.2/training/geo-partitioning.md
@@ -189,9 +189,9 @@ For added clarity, here's a key showing how nodes map to localities:

Node IDs | Locality
---------|---------
1 - 3 | `--locality=region=us-east=datacenter=us-east1`
4 - 6 | `--locality=region=us-west=datacenter=us-west1`
7 - 9 | `--locality=region=us-west=datacenter=us-west2`
1 - 3 | `--locality=region=us-east,datacenter=us-east1`
4 - 6 | `--locality=region=us-west,datacenter=us-west1`
7 - 9 | `--locality=region=us-west,datacenter=us-west2`

In this case, for the single range containing `vehicles` data, one replica is in each datacenter, and the leaseholder is in the `us-west1` datacenter. The same is true for the single range containing `users` data, but the leaseholder is in the `us-west2` datacenter.

@@ -245,6 +245,10 @@ For the single range containing `users` data, one replica is in each datacenter,

For this service, the most effective technique for improving read and write latency is to geo-partition the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges.

{{site.data.alerts.callout_info}}
The following steps partition each table by city to demonstrate the feature. In production, it is recommended that you use a `region` constraint on the database instead.
{{site.data.alerts.end}}

1. Partition the `users` table by city:

{% include copy-clipboard.html %}
@@ -317,34 +321,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.users \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
@@ -355,34 +344,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.vehicles \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
@@ -393,34 +367,19 @@ Since our nodes are located in 3 specific datacenters, we're only going to use t
~~~ sql
> ALTER PARTITION new_york OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION boston OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION washington_dc OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-east1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION seattle OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west1]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION san_francisco OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~

{% include copy-clipboard.html %}
~~~ sql
> ALTER PARTITION los_angeles OF TABLE movr.rides \
CONFIGURE ZONE USING constraints='[+datacenter=us-west2]';
~~~
@@ -539,7 +498,7 @@ In the next module, you'll start with a fresh cluster, so take a moment to clean
$ pkill -9 cockroach
~~~

This simplified shutdown process is only appropriate for a lab/evaluation scenario. In a production environment, you would use `cockroach quit` to gracefully shut down each node.
This simplified shutdown process is only appropriate for a lab/evaluation scenario.

3. Remove the nodes' data directories:

2 changes: 1 addition & 1 deletion v2.2/training/orchestration-with-kubernetes.md
@@ -17,7 +17,7 @@ Feature | Description
--------|------------
[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets require Kubernetes version 1.9 or newer.
[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.<br><br>When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
