
# Cluster TLS guide

Cluster TLS policy is configured per cluster via the TPR spec provided to etcd-operator. For etcd's TLS support and requirements, see the etcd security guide. To learn about generating self-signed TLS certs, see this tutorial.

## Static cluster TLS Policy

Static TLS means that the keys and certs are generated by the user and passed to the operator as Kubernetes secrets.

Let's use the following example and walk through the spec:

```yaml
apiVersion: "etcd.coreos.com/v1beta1"
kind: "Cluster"
metadata:
  name: example
  namespace: default
spec:
  ...
  TLS:
    static:
      member:
        peerSecret: etcd-peer-tls
        serverSecret: etcd-server-tls
      operatorSecret: etcd-client-tls
```

The example cluster YAML manifest and example certs can be found in the `example/tls/` directory.

### member.peerSecret

`member.peerSecret` contains the PEM-encoded private keys and x509 certificates for etcd peer communication.

The peer TLS assets should have the following:

- `peer.crt`: peer communication cert. The certificate should allow the wildcard domain `*.${clusterName}.${namespace}.svc.cluster.local`. In this case, it is `*.example.default.svc.cluster.local`.
- `peer.key`: peer communication key.
- `peer-ca.crt`: CA cert for this peer key-cert pair.
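One possible way to generate these assets is with plain `openssl` (a hedged sketch; the file names match the list above, while the subjects and lifetimes are illustrative):

```sh
# Self-signed CA for peer-to-peer traffic (subject and lifetime are arbitrary).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout peer-ca.key -out peer-ca.crt -subj "/CN=etcd-peer-ca"

# Peer key and certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout peer.key -out peer.csr -subj "/CN=etcd-peer"

# Sign the CSR, adding the required wildcard SAN.
printf 'subjectAltName=DNS:*.example.default.svc.cluster.local\n' > peer-san.cnf
openssl x509 -req -in peer.csr -CA peer-ca.crt -CAkey peer-ca.key \
  -CAcreateserial -days 365 -extfile peer-san.cnf -out peer.crt
```

Tools such as cfssl work equally well; the only hard requirement is the wildcard SAN shown above.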

Create a secret containing those:

```sh
$ kubectl create secret generic etcd-peer-tls --from-file=peer-ca.crt --from-file=peer.crt --from-file=peer.key
```

Once passed, etcd-operator will mount this secret at `/etc/etcdtls/member/peer-tls/` for each etcd member pod in the cluster.

### member.serverSecret

`member.serverSecret` contains the PEM-encoded private keys and x509 certificates for etcd client communication on the server side.

The client TLS assets should have the following:

- `server.crt`: etcd server's client communication cert. The certificate should allow the wildcard domain `*.${clusterName}.${namespace}.svc.cluster.local`, as well as `${clusterName}-client.${namespace}.svc.cluster.local` and `localhost`. In this case, that is `*.example.default.svc.cluster.local`, `example-client.default.svc.cluster.local`, and `localhost`.
- `server.key`: etcd server's client communication key.
- `server-ca.crt`: CA cert for validating the certs of etcd clients.
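These can be produced with `openssl` in much the same way as the peer assets; the notable difference is the longer SAN list. A sketch, assuming for simplicity a single CA for both serving and client certs (subjects and lifetimes are illustrative):

```sh
# Single CA, assumed here to sign both server and client certs.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server-ca.key -out server-ca.crt -subj "/CN=etcd-ca"

# Server key and certificate signing request.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=etcd-server"

# All three required names go into the SAN list.
printf 'subjectAltName=DNS:*.example.default.svc.cluster.local,DNS:example-client.default.svc.cluster.local,DNS:localhost\n' > server-san.cnf
openssl x509 -req -in server.csr -CA server-ca.crt -CAkey server-ca.key \
  -CAcreateserial -days 365 -extfile server-san.cnf -out server.crt
```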

Create a secret containing those:

```sh
$ kubectl create secret generic etcd-server-tls --from-file=server-ca.crt --from-file=server.crt --from-file=server.key
```

etcd-operator will mount this secret at `/etc/etcdtls/member/server-tls/` for each etcd member pod in the cluster.

### operatorSecret

The operator needs to send client requests (e.g. snapshot, health check, add/remove member) in order to maintain the cluster. `operatorSecret` contains the PEM-encoded private keys and x509 certificates for communicating with the etcd server via the client URL.

The operator's etcd TLS assets should have the following:

- `etcd-client.crt`: operator's etcd x509 client cert.
- `etcd-client.key`: operator's etcd x509 client key.
- `etcd-client-ca.crt`: CA cert for validating the certs of etcd members.

These correspond to the `--cert`, `--key`, and `--cacert` arguments of etcdctl.
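A sketch of generating these with `openssl`, again assuming for simplicity a single CA signs both the etcd server and client certs (file names other than the three listed above are illustrative):

```sh
# Single CA; with one CA, the operator's trust bundle is just the CA cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=etcd-ca"

# Client key and certificate signing request for the operator.
openssl req -newkey rsa:2048 -nodes \
  -keyout etcd-client.key -out etcd-client.csr -subj "/CN=etcd-operator"

# Mark the cert for client authentication and sign it.
printf 'extendedKeyUsage=clientAuth\n' > client.cnf
openssl x509 -req -in etcd-client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -extfile client.cnf -out etcd-client.crt

cp ca.crt etcd-client-ca.crt
```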

Create a secret containing those:

```sh
$ kubectl create secret generic etcd-client-tls --from-file=etcd-client-ca.crt --from-file=etcd-client.crt --from-file=etcd-client.key
```

Pass `etcd-client-tls` to the `operatorSecret` field.

## Access a secure etcd cluster

Assume a secure etcd cluster named `example` is up and running.

To access the cluster, use the FQDN `example-client.default.svc.cluster.local`, which matches a SAN of its certificates. Any additional DNS names or IPs must be added as SANs when generating the etcd server's client certs.

Assume the following certs are being used:

```
etcd-client.crt
etcd-client.key
etcd-client-ca.crt
```

Both `etcd-client.crt` and `etcd-client.key` must be trusted by the etcd server's client CA (`server-ca.crt`), and `etcd-client-ca.crt` must trust the etcd server's cert and key.
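These two trust relationships can be checked directly with `openssl verify`. A self-contained sketch using two separate CAs to make the direction of trust explicit (all file names beyond those in this guide, and all subjects, are illustrative):

```sh
# CA that signs the etcd server cert; its cert is what the operator
# uses as etcd-client-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout serving-ca.key -out etcd-client-ca.crt -subj "/CN=serving-ca"

# CA that signs the operator's client cert; its cert is what the etcd
# members use as server-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout clients-ca.key -out server-ca.crt -subj "/CN=clients-ca"

# Issue the two leaf certs from their respective CAs.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=etcd-server"
openssl x509 -req -in server.csr -CA etcd-client-ca.crt -CAkey serving-ca.key \
  -CAcreateserial -days 365 -out server.crt

openssl req -newkey rsa:2048 -nodes \
  -keyout etcd-client.key -out etcd-client.csr -subj "/CN=etcd-operator"
openssl x509 -req -in etcd-client.csr -CA server-ca.crt -CAkey clients-ca.key \
  -CAcreateserial -days 365 -out etcd-client.crt

# The two checks: server trusts the client cert, client trusts the server cert.
openssl verify -CAfile server-ca.crt etcd-client.crt
openssl verify -CAfile etcd-client-ca.crt server.crt
```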

Here is an example etcdctl command to list members from the secure etcd cluster:

```sh
$ ETCDCTL_API=3 etcdctl --endpoints=https://example-client.default.svc.cluster.local:2379 \
    --cert=etcd-client.crt --key=etcd-client.key --cacert=etcd-client-ca.crt \
    member list -w table
```

This command should be run from a pod inside the Kubernetes cluster so that the service DNS name resolves.
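For example, a throwaway client pod could mount the `etcd-client-tls` secret and run etcdctl from inside the cluster. A hedged sketch; the pod name, image tag, and mount path are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcdctl-client            # illustrative name
  namespace: default
spec:
  containers:
  - name: etcdctl
    image: quay.io/coreos/etcd:v3.2.13   # any etcd image that ships etcdctl
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: client-tls
      mountPath: /etc/etcdtls/client     # illustrative mount path
      readOnly: true
  volumes:
  - name: client-tls
    secret:
      secretName: etcd-client-tls
```

With the pod running, the etcdctl command above can be executed via `kubectl exec etcdctl-client -- ...`, pointing `--cert`, `--key`, and `--cacert` at the files under the mount path.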