draft custom CA steps for K8s deployment #6232

Merged
merged 9 commits on Jan 10, 2020
@@ -67,6 +67,10 @@ If you're on Hosted GKE, before starting, make sure the email address associated

4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance:

{{site.data.alerts.callout_success}}
This configuration defaults to using the Kubernetes CA for authentication.
{{site.data.alerts.end}}

{% include copy-clipboard.html %}
~~~ shell
$ kubectl apply \
-f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml
~~~
@@ -142,7 +146,7 @@ Active monitoring helps you spot problems early, but it is also essential to sen
~~~

{{site.data.alerts.callout_danger}}
The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen.
{{site.data.alerts.end}}
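
For reference, a secret with this name is typically created from a local Alertmanager configuration file along these lines (a sketch; the local file name `alertmanager-config.yaml` is illustrative):

{% include copy-clipboard.html %}
~~~ shell
$ kubectl create secret generic alertmanager-cockroachdb \
--from-file=alertmanager.yaml=alertmanager-config.yaml
~~~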

4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts:
4 changes: 4 additions & 0 deletions _includes/v19.2/orchestration/kubernetes-scale-cluster.md
@@ -20,6 +20,10 @@ To do this, add a new worker node and then edit your StatefulSet configuration t
~~~
statefulset.apps/cockroachdb scaled
~~~
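
The `statefulset.apps/cockroachdb scaled` output shown above typically comes from a scale command along these lines (a sketch; the replica count of 4 is illustrative):

{% include copy-clipboard.html %}
~~~ shell
$ kubectl scale statefulset cockroachdb --replicas=4
~~~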

{{site.data.alerts.callout_success}}
If you aren't using the Kubernetes CA to sign certificates, you can now skip to step 6.
{{site.data.alerts.end}}
</section>

<section class="filter-content" markdown="1" data-scope="helm">
83 changes: 83 additions & 0 deletions _includes/v19.2/orchestration/start-cockroachdb-local-insecure.md
@@ -0,0 +1,83 @@
1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:

{% include copy-clipboard.html %}
~~~ shell
$ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
~~~

~~~
service/cockroachdb-public created
service/cockroachdb created
poddisruptionbudget.policy/cockroachdb-budget created
statefulset.apps/cockroachdb created
~~~

2. Confirm that three pods are `Running` successfully. Note that they will not
be considered `Ready` until after the cluster has been initialized:

{% include copy-clipboard.html %}
~~~ shell
$ kubectl get pods
~~~

~~~
NAME READY STATUS RESTARTS AGE
cockroachdb-0 0/1 Running 0 2m
cockroachdb-1 0/1 Running 0 2m
cockroachdb-2 0/1 Running 0 2m
~~~

3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:

{% include copy-clipboard.html %}
~~~ shell
$ kubectl get pv
~~~

~~~
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
~~~
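
To also confirm the corresponding claims mentioned in this step, you can list the PersistentVolumeClaims (the `datadir-cockroachdb-N` names match the `CLAIM` column above):

{% include copy-clipboard.html %}
~~~ shell
$ kubectl get pvc
~~~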

4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster:

{% include copy-clipboard.html %}
~~~ shell
$ kubectl create \
-f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
~~~

~~~
job.batch/cluster-init created
~~~

5. Confirm that cluster initialization has completed successfully. The job should complete successfully, and the Kubernetes pods should soon be considered `Ready`:

{% include copy-clipboard.html %}
~~~ shell
$ kubectl get job cluster-init
~~~

~~~
NAME COMPLETIONS DURATION AGE
cluster-init 1/1 7s 27s
~~~

{% include copy-clipboard.html %}
~~~ shell
$ kubectl get pods
~~~

~~~
NAME READY STATUS RESTARTS AGE
cluster-init-cqf8l 0/1 Completed 0 56s
cockroachdb-0 1/1 Running 0 7m51s
cockroachdb-1 1/1 Running 0 7m51s
cockroachdb-2 1/1 Running 0 7m51s
~~~

{{site.data.alerts.callout_success}}
The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
{{site.data.alerts.end}}
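
For example, to view the logs of the first pod (using a pod name from the `kubectl get pods` output above):

{% include copy-clipboard.html %}
~~~ shell
$ kubectl logs cockroachdb-0
~~~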