Merged
1 change: 1 addition & 0 deletions _includes/v19.1/orchestration/kubernetes-scale-cluster.md
Original file line number Diff line number Diff line change
@@ -2,6 +2,7 @@ The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get plac

1. Add a worker node:
- On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
- On EKS, resize your [Worker Node Group](https://docs.aws.amazon.com/eks/latest/userguide/update-stack.html).
- On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
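On EKS, the worker node group can also be resized from the command line with `eksctl` (a sketch, assuming the `cockroachdb` cluster and `standard-workers` node group names used elsewhere in these docs; substitute your own names):

```shell
# Scale the "standard-workers" node group of the "cockroachdb" cluster to 4 nodes.
# The cluster and node group names are assumptions; match them to your deployment.
eksctl scale nodegroup --cluster=cockroachdb --name=standard-workers --nodes=4
```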

13 changes: 10 additions & 3 deletions _includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md
@@ -47,7 +47,14 @@
$ helm init --service-account tiller
~~~

3. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart:
3. Update your Helm chart repositories to ensure that you're using the latest CockroachDB chart:

{% include copy-clipboard.html %}
~~~ shell
$ helm repo update
~~~
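If the `stable` chart repository was never added on this workstation, `helm repo update` has nothing to refresh. It can be added first (a sketch for the Helm 2-era repository URL, which may differ in your environment):

```shell
# Add the Helm 2-era stable repository (URL is an assumption for that era),
# then refresh the local chart cache.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
```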

4. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart:

{{site.data.alerts.callout_info}}
This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
@@ -64,7 +71,7 @@
You can customize your deployment by passing [configuration parameters](https://github.com/helm/charts/tree/master/stable/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD).
{{site.data.alerts.end}}
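As one illustration of the callout above, the storage parameters could be overridden at install time (a sketch; the `200Gi` size and `ssd` storage class name are assumptions for illustration, not recommendations):

```shell
# Helm 2-style install overriding the chart's Storage and StorageClass
# parameters; the values shown are illustrative assumptions.
helm install --name my-release stable/cockroachdb \
  --set Storage=200Gi,StorageClass=ssd
```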

4. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`:
5. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`:

{% include copy-clipboard.html %}
~~~ shell
@@ -79,7 +86,7 @@
my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m
~~~

5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:

{% include copy-clipboard.html %}
~~~ shell
28 changes: 27 additions & 1 deletion _includes/v19.1/orchestration/start-kubernetes.md
@@ -1,6 +1,7 @@
Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.
Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service, the hosted Amazon Elastic Kubernetes Service (EKS), or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.

- [Hosted GKE](#hosted-gke)
- [Hosted EKS](#hosted-eks)
- [Manual GCE](#manual-gce)
- [Manual AWS](#manual-aws)

@@ -55,6 +56,31 @@ Choose whether you want to orchestrate CockroachDB with Kubernetes using the hos
clusterrolebinding "cluster-admin-binding" created
~~~

### Hosted EKS

1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.

This includes installing and configuring the AWS CLI, `eksctl` (the command-line tool for creating and deleting Kubernetes clusters on EKS), and `kubectl` (the command-line tool for managing Kubernetes from your workstation).

2. From your local workstation, start the Kubernetes cluster:

{% include copy-clipboard.html %}
~~~ shell
$ eksctl create cluster \
--name cockroachdb \
--version 1.13 \
--nodegroup-name standard-workers \
--node-type m5.xlarge \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--node-ami auto
~~~

This creates three EC2 instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GiB of memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).

Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
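After `eksctl` reports the cluster ready, it has also written credentials to your kubeconfig, so a quick sanity check can confirm the workers joined (a sketch):

```shell
# eksctl updates ~/.kube/config by default, so kubectl should now
# list the m5.xlarge worker nodes in Ready status.
kubectl get nodes
```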

### Manual GCE

From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://kubernetes.io/docs/setup/turnkey/gce/) documentation.
1 change: 1 addition & 0 deletions _includes/v19.2/orchestration/kubernetes-scale-cluster.md
@@ -2,6 +2,7 @@ The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get plac

1. Add a worker node:
- On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
- On EKS, resize your [Worker Node Group](https://docs.aws.amazon.com/eks/latest/userguide/update-stack.html).
- On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
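On EKS, the worker node group can also be resized from the command line with `eksctl` (a sketch, assuming the `cockroachdb` cluster and `standard-workers` node group names used elsewhere in these docs; substitute your own names):

```shell
# Scale the "standard-workers" node group of the "cockroachdb" cluster to 4 nodes.
# The cluster and node group names are assumptions; match them to your deployment.
eksctl scale nodegroup --cluster=cockroachdb --name=standard-workers --nodes=4
```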

13 changes: 10 additions & 3 deletions _includes/v19.2/orchestration/start-cockroachdb-helm-insecure.md
@@ -47,7 +47,14 @@
$ helm init --service-account tiller
~~~

3. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart:
3. Update your Helm chart repositories to ensure that you're using the latest CockroachDB chart:

{% include copy-clipboard.html %}
~~~ shell
$ helm repo update
~~~
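If the `stable` chart repository was never added on this workstation, `helm repo update` has nothing to refresh. It can be added first (a sketch for the Helm 2-era repository URL, which may differ in your environment):

```shell
# Add the Helm 2-era stable repository (URL is an assumption for that era),
# then refresh the local chart cache.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update
```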

4. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart:

{{site.data.alerts.callout_info}}
This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
@@ -64,7 +71,7 @@
You can customize your deployment by passing [configuration parameters](https://github.com/helm/charts/tree/master/stable/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD).
{{site.data.alerts.end}}
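As one illustration of the callout above, the storage parameters could be overridden at install time (a sketch; the `200Gi` size and `ssd` storage class name are assumptions for illustration, not recommendations):

```shell
# Helm 2-style install overriding the chart's Storage and StorageClass
# parameters; the values shown are illustrative assumptions.
helm install --name my-release stable/cockroachdb \
  --set Storage=200Gi,StorageClass=ssd
```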

4. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`:
5. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`:

{% include copy-clipboard.html %}
~~~ shell
@@ -79,7 +86,7 @@
my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m
~~~

5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:

{% include copy-clipboard.html %}
~~~ shell
28 changes: 27 additions & 1 deletion _includes/v19.2/orchestration/start-kubernetes.md
@@ -1,6 +1,7 @@
Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.
Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service, the hosted Amazon Elastic Kubernetes Service (EKS), or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.

- [Hosted GKE](#hosted-gke)
- [Hosted EKS](#hosted-eks)
- [Manual GCE](#manual-gce)
- [Manual AWS](#manual-aws)

@@ -55,6 +56,31 @@ Choose whether you want to orchestrate CockroachDB with Kubernetes using the hos
clusterrolebinding "cluster-admin-binding" created
~~~

### Hosted EKS

1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.

This includes installing and configuring the AWS CLI, `eksctl` (the command-line tool for creating and deleting Kubernetes clusters on EKS), and `kubectl` (the command-line tool for managing Kubernetes from your workstation).

2. From your local workstation, start the Kubernetes cluster:

{% include copy-clipboard.html %}
~~~ shell
$ eksctl create cluster \
--name cockroachdb \
--version 1.13 \
--nodegroup-name standard-workers \
--node-type m5.xlarge \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--node-ami auto
~~~

This creates three EC2 instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GiB of memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).

Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
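After `eksctl` reports the cluster ready, it has also written credentials to your kubeconfig, so a quick sanity check can confirm the workers joined (a sketch):

```shell
# eksctl updates ~/.kube/config by default, so kubectl should now
# list the m5.xlarge worker nodes in Ready status.
kubectl get nodes
```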

### Manual GCE

From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://kubernetes.io/docs/setup/turnkey/gce/) documentation.
29 changes: 8 additions & 21 deletions v19.1/orchestrate-cockroachdb-with-kubernetes-insecure.md
@@ -141,30 +141,11 @@ To shut down the CockroachDB cluster:
<section class="filter-content" markdown="1" data-scope="helm">
{% include copy-clipboard.html %}
~~~ shell
$ kubectl delete pods,statefulsets,services,persistentvolumeclaims,persistentvolumes,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=my-release-cockroachdb
$ helm delete my-release --purge
~~~

~~~
pod "my-release-cockroachdb-0" deleted
pod "my-release-cockroachdb-1" deleted
pod "my-release-cockroachdb-2" deleted
pod "my-release-cockroachdb-3" deleted
service "alertmanager-cockroachdb" deleted
service "my-release-cockroachdb" deleted
service "my-release-cockroachdb-public" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-0" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-1" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-2" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted
poddisruptionbudget "cockroachdb-budget" deleted
job "cluster-init" deleted
clusterrolebinding "prometheus" deleted
clusterrole "prometheus" deleted
serviceaccount "prometheus" deleted
alertmanager "cockroachdb" deleted
prometheus "cockroachdb" deleted
prometheusrule "prometheus-cockroachdb-rules" deleted
servicemonitor "cockroachdb" deleted
release "my-release" deleted
~~~
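Note that `helm delete` does not necessarily remove the persistent volume claims created for the pods. If the data should be gone too, they may need deleting separately (a sketch, using the `app=my-release-cockroachdb` label from the earlier cleanup command):

```shell
# Remove PVCs left behind by the release; the label selector matches
# the one used elsewhere in this tutorial.
kubectl delete pvc -l app=my-release-cockroachdb
```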
</section>

@@ -175,6 +156,12 @@
~~~ shell
$ gcloud container clusters delete cockroachdb
~~~
- Hosted EKS:

{% include copy-clipboard.html %}
~~~ shell
$ eksctl delete cluster --name cockroachdb
~~~
- Manual GCE:

{% include copy-clipboard.html %}
29 changes: 8 additions & 21 deletions v19.2/orchestrate-cockroachdb-with-kubernetes-insecure.md
@@ -140,30 +140,11 @@ To shut down the CockroachDB cluster:
<section class="filter-content" markdown="1" data-scope="helm">
{% include copy-clipboard.html %}
~~~ shell
$ kubectl delete pods,statefulsets,services,persistentvolumeclaims,persistentvolumes,poddisruptionbudget,jobs,rolebinding,clusterrolebinding,role,clusterrole,serviceaccount,alertmanager,prometheus,prometheusrule,serviceMonitor -l app=my-release-cockroachdb
$ helm delete my-release --purge
~~~

~~~
pod "my-release-cockroachdb-0" deleted
pod "my-release-cockroachdb-1" deleted
pod "my-release-cockroachdb-2" deleted
pod "my-release-cockroachdb-3" deleted
service "alertmanager-cockroachdb" deleted
service "my-release-cockroachdb" deleted
service "my-release-cockroachdb-public" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-0" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-1" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-2" deleted
persistentvolumeclaim "datadir-my-release-cockroachdb-3" deleted
poddisruptionbudget "cockroachdb-budget" deleted
job "cluster-init" deleted
clusterrolebinding "prometheus" deleted
clusterrole "prometheus" deleted
serviceaccount "prometheus" deleted
alertmanager "cockroachdb" deleted
prometheus "cockroachdb" deleted
prometheusrule "prometheus-cockroachdb-rules" deleted
servicemonitor "cockroachdb" deleted
release "my-release" deleted
~~~
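Note that `helm delete` does not necessarily remove the persistent volume claims created for the pods. If the data should be gone too, they may need deleting separately (a sketch, using the `app=my-release-cockroachdb` label from the earlier cleanup command):

```shell
# Remove PVCs left behind by the release; the label selector matches
# the one used elsewhere in this tutorial.
kubectl delete pvc -l app=my-release-cockroachdb
```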
</section>

@@ -174,6 +155,12 @@
~~~ shell
$ gcloud container clusters delete cockroachdb
~~~
- Hosted EKS:

{% include copy-clipboard.html %}
~~~ shell
$ eksctl delete cluster --name cockroachdb
~~~
- Manual GCE:

{% include copy-clipboard.html %}