WARNING: Pebble as an ACME server and this Helm chart are meant for testing purposes only. They are not secure and not meant for production.
Pebble is an ACME server like Let's Encrypt. ACME servers can provide TLS certificates for HTTP over TLS (HTTPS) to ACME clients that are able to prove control over a domain name through an ACME challenge.
This Helm chart makes it easy to install Pebble in a Kubernetes cluster using Helm along with an optional utility server that can act as a configurable DNS server to influence Pebble DNS lookups.
To test interactions against an ACME server like Let's Encrypt from an ephemeral CI environment, which is typically unreachable from the outside, using Let's Encrypt's staging environment likely won't work, at least if you are using the HTTP-01 ACME challenge.
In the commonly used HTTP-01 ACME challenge, an ACME client proves its control of a domain's web server. During this challenge, the ACME server will look up the domain name's IP and make a web request to it, and that's the problem! In an ephemeral CI environment, it is likely impossible to receive new incoming connections from Let's Encrypt's servers.
```shell
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --install pebble jupyterhub/pebble
```
A packaged Helm chart contains its sub-charts' Helm templates, so bundling Pebble unconditionally is not recommended for Helm charts being packaged for distribution.
Installing Pebble as part of another chart should therefore likely be made conditional using tags or conditions.
```yaml
# Chart.yaml - Helm 3 only, see note below for Helm 2 use.
apiVersion: v2
name: my-chart
# ...
dependencies:
  - name: pebble
    version: 0.1.0
    repository: https://jupyterhub.github.io/helm-chart/
    tags:
      - ci
```
NOTE: Helm 3 supports `Chart.yaml` files with `apiVersion: v2`, where you can specify chart dependencies directly. If you want to remain compatible with Helm 2, your `Chart.yaml` file has to have `apiVersion: v1` and the chart dependencies need to be specified in a separate `requirements.yaml` file.
Helm charts render templates into Kubernetes YAML files using configurable values. A Helm chart comes with default values, and these can be overridden during chart installation and upgrades, for example with the `--values` flag to pass a YAML file or with the `--set` flag.
To configure the Pebble Helm chart, create a `my-values.yaml` file to pass with `--values`. If you have installed Pebble as a sub-chart, you should nest the configuration under the sub-chart's name.
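As a sketch of that nesting, assuming Pebble was added as a chart dependency named `pebble`: its values then go under a top-level `pebble` key in the parent chart's values file, so Pebble's own `pebble` configuration key appears twice (the values shown are illustrative):

```yaml
# my-values.yaml of the parent chart (illustrative)
pebble:        # the sub-chart's name
  pebble:      # Pebble's own top-level configuration key
    env:
      - name: PEBBLE_VA_NOSLEEP
        value: "1"
```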
Pebble is developed to test ACME clients and ensure they are robust, so it can intentionally act mischievously.
The defaults of this Helm chart, seen below, configure Pebble to ensure speedy certificate acquisition. Note that if you provide an array to `pebble.env`, it will override the default array of environment variables.
```yaml
pebble:
  env:
    # ref: https://github.com/letsencrypt/pebble#testing-at-full-speed
    - name: PEBBLE_VA_NOSLEEP
      value: "1"
```
See Pebble's documentation for more info about its mischievous behavior.
Pebble connects to a domain's web server on specific ports during HTTP-01 (default 80) and TLS-ALPN-01 (default 443) challenges, and you can configure those ports. This is useful if, for example, your web server is behind a Kubernetes Service exposing it on port 8080.
```yaml
pebble:
  config:
    pebble:
      httpPort: 80  # this is the port where outgoing HTTP-01 challenges go
      tlsPort: 443  # this is the port where outgoing TLS-ALPN-01 challenges go
```
Pebble can optionally be deployed with a configurable DNS server next to it that Pebble will then use for DNS lookups. This DNS server can for example be configured to resolve all domain lookups to a specific IP, or have CNAME entries to point a domain to another domain, such as directing `example.local` to `mysvc.mynamespace`.
```yaml
challtestsrv:
  enabled: true
```
You can make all DNS lookups default to a specific IP. This IP can either be set explicitly, like `10.0.13.37`, or you can set it to `$(MYSVC_SERVICE_HOST)`, which relies on the kubelet to add and expand the `_SERVICE_HOST`-suffixed environment variables for Kubernetes Services in the same namespace.
If `_SERVICE_HOST` environment variables are used, the Service must exist before the Pebble pod is created.
```yaml
challtestsrv:
  command:
    defaultIPv4: 10.0.13.37
    # defaultIPv4: $(MYSVC_SERVICE_HOST)
```
To initialize the DNS server with records, we can use its management REST API and send POST requests to it when it starts up.
Here is an example to add a CNAME record pointing to a Kubernetes Service's domain name.
```yaml
challtestsrv:
  initPostRequests:
    - path: set-cname
      data:
        host: example.local
        target: my-acme-client.my-namespace
```
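The management API accepts other record types as well; for example, the upstream `pebble-challtestsrv` project documents an `add-a` endpoint for adding A records. The endpoint and payload names below are taken from the upstream documentation, so verify them against the version you deploy:

```yaml
challtestsrv:
  initPostRequests:
    - path: add-a
      data:
        host: example.local
        addresses: ["10.0.13.37"]
```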
The ACME client should be configured to work against the Pebble ACME server. The ACME client also needs to explicitly trust the root TLS certificate that has signed the leaf TLS certificate Pebble uses for the ACME communication, which is made over HTTPS.
The ACME client should communicate with something like `https://pebbles-service-name.pebbles-namespace:8443/dir`. The namespace part can be omitted if Pebble is in the same namespace as the ACME client, and `pebbles-service-name` can be found with `kubectl get svc --all-namespaces | grep pebble`.
WARNING: All HTTPS communication should be treated as unsafe HTTP communication! This is only meant for testing!
The ACME client and anything else communicating with Pebble's ACME server or management REST API needs to trust this root certificate. Its associated publicly exposed key has signed the leaf certificate Pebble uses for HTTPS communication on ports 8443 (Pebble's ACME server) and 8080 (Pebble's management REST API with `/roots/0` etc.).
The other root certificate is what Pebble uses to sign certificates for its ACME clients. Pebble recreates this root certificate on startup and exposes it and its associated key through the management REST API at `https://pebble:8080/roots/0` without any authorization.
The ACME client needs to be provided with this root certificate and configured to trust it.
A Kubernetes ConfigMap can contain the root certificate, and then be mounted as a file in the ACME client's pod's container.
If the Pebble Helm chart is installed in the ACME client's namespace, you can reuse a ConfigMap it creates that contains the root certificate to trust. The ConfigMap's name can be found with `kubectl get cm --all-namespaces | grep pebble`.
Otherwise, you can create a ConfigMap with the root certificate like this.
```shell
cat <<EOF | kubectl apply --namespace <namespace-of-acme-client> -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: pebble
data:
  root-cert.pem: |
    -----BEGIN CERTIFICATE-----
    MIIDSzCCAjOgAwIBAgIIOvR7X+wFgKkwDQYJKoZIhvcNAQELBQAwIDEeMBwGA1UE
    AxMVbWluaWNhIHJvb3QgY2EgM2FmNDdiMCAXDTIwMDQyNjIzMDYxNloYDzIxMjAw
    NDI3MDAwNjE2WjAgMR4wHAYDVQQDExVtaW5pY2Egcm9vdCBjYSAzYWY0N2IwggEi
    MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCaoISLUOImo7vm7sGUpeycouDP
    TcJj6CxfCbvBsrlAg8ERGIph9H7TuDnTVk46pOaoxByGlwvvh4qR/Dled+G8NCt5
    s0r0yemY/fx1grm1TmcJRO+A1P5kx/M9hy+kVcyLRvPOnvo8Thj/4zvaJDh+pSjt
    5oAQvOHt9hYwGkkvSsZw12cTUuCsbypQ4lapDSeAjp3pNlqFcWmCvF9Ib3URDybN
    JWhY6yQQe54D2LxYqxCfYZjKhNbaxlNTlHu0Ujy75I/AdSjK6DljAZh0OimuQNEm
    FyXWvpnfyHbV5f0mMiXIOo2FY8izSD7cyFagmr0XvymCtxeDK1+MvT2pM+rXAgMB
    AAGjgYYwgYMwDgYDVR0PAQH/BAQDAgKEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggr
    BgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBR0MgecNe4RY575
    qAtIt6zAjbBqLTAfBgNVHSMEGDAWgBR0MgecNe4RY575qAtIt6zAjbBqLTANBgkq
    hkiG9w0BAQsFAAOCAQEAjtGjoXGRG7586vyT3XcJBa8y9MOsDhQGOec23h40NJCn
    SPF28bmTIaWhB+Hv8G+Mkyf9Ov3L5L/mH0VGvZUkMAnSdT4vaMYGrTvMtYGS/8ew
    lPnlSJ3oO9Kz9zfOneoPDD1OGkV0Oq3wLn9cq6jQgItEeACsXNtaogXJxYhvxiV1
    1k/gjXmG9pvFpb0A1bw6apxGftIViDKrPR2P/pG3QAuLKywQiNxZ5odf3kvKdZmJ
    hLbu119My9XiiWhNegufcRNRNEnKJ5AQsBEwLEnD4oeIZmFvYVKOPjfWRV5qczVi
    mUPjtQv88HhlgX/lBVWJ2VONlFWVoOreZz4GkAm5bA==
    -----END CERTIFICATE-----
EOF
```
```yaml
# ... within a Pod specification
volumes:
  - name: pebble-root-cert
    configMap:
      name: pebble
      ## ... if the Pebble chart was installed as a sub-chart.
      #name: {{ .Release.Name }}-pebble
containers:
  - name: my-container-with-an-acme-client
    # ...
    volumeMounts:
      - name: pebble-root-cert
        subPath: root-cert.pem
        mountPath: /etc/pebble/root-cert.pem
```
Configuring the ACME client to trust a given root certificate depends on the ACME client. As an example, a popular ACME client in Kubernetes contexts is LEGO. LEGO can be configured to trust a root certificate and its signed leaf certificates if a file path is provided through the `LEGO_CA_CERTIFICATES` environment variable.
```yaml
# ... within a Pod specification template of a Helm chart
containers:
  - name: my-container-with-a-lego-acme-client
    # ...
    env:
      - name: LEGO_CA_CERTIFICATES
        value: /etc/pebble/root-cert.pem
```
If you don't need to run tests with a specific domain name, you could use the DNS entry of a Kubernetes Service instead. For example, if an ACME client is running in a pod targeted by the Kubernetes Service called `client-svc` in the namespace `client-namespace`, then you could use the `client-svc` or `client-svc.client-namespace` domain names.
A big upside of this approach is that any pod in the Kubernetes cluster will be able to find its way to the actual web server using the domain name, not only those, like Pebble, that use the configurable DNS server.
If you have a local Kubernetes cluster running on your computer or in a VM, and have exposed Kubernetes Services through nodePorts, then requests you make from the computer or VM will go to `localhost`. But TLS certificates are only valid for certain domain names, and the certificates acquired by the ACME client won't be valid for `localhost`.
There is a workaround. By adding the lines below to `/etc/hosts`, you will make `mysvc.mynamespace` and other variants resolve to `127.0.0.1` (localhost).
```
127.0.0.1 mysvc
127.0.0.1 mysvc.mynamespace
127.0.0.1 mysvc.mynamespace.svc
127.0.0.1 mysvc.mynamespace.svc.cluster.local
```
It is also possible to configure `/etc/hosts` in CI systems like TravisCI, or in a Kubernetes Pod through the `spec.hostAliases` configuration.
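For a Kubernetes Pod, the same mapping as the `/etc/hosts` lines above can be sketched with `spec.hostAliases` (the service name `mysvc` is illustrative):

```yaml
# Pod specification fragment
spec:
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - mysvc
        - mysvc.mynamespace
        - mysvc.mynamespace.svc
        - mysvc.mynamespace.svc.cluster.local
```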
`/etc/resolv.conf` can be configured to make use of a specific DNS server for various domains and their subdomains. Kubernetes Pods can also be configured through the `spec.dnsConfig` configuration.
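As a sketch, a Pod could be pointed at the configurable DNS server's Service like this; the nameserver IP is a placeholder you would replace with the actual cluster IP of the challenge test server's Service:

```yaml
# Pod specification fragment
spec:
  dnsPolicy: "None"   # ignore the cluster's default DNS settings
  dnsConfig:
    nameservers:
      - 10.0.0.53     # placeholder: the challtestsrv Service's cluster IP
```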
A Kubernetes Service can be used to associate a CNAME record with its own DNS names. For example, a Kubernetes Service named `dogs` with `spec.externalName` set to `dogs.info` would make `dogs`, `dogs.mynamespace`, `dogs.mynamespace.svc`, and `dogs.mynamespace.svc.cluster.local` resolve via a CNAME entry to `dogs.info`.
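A minimal ExternalName Service matching that example could look like this (the names `dogs`, `mynamespace`, and `dogs.info` are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dogs
  namespace: mynamespace
spec:
  type: ExternalName
  externalName: dogs.info
```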
```shell
# clone the git repo
git clone https://github.com/jupyterhub/pebble-helm-chart.git
cd pebble-helm-chart

# setup a local k8s cluster
k3d create --wait 60 --publish 8443:32443 --publish 8080:32080 --publish 8053:32053/udp --publish 8053:32053/tcp --publish 8081:32081
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

# install pebble
helm upgrade pebble helm-chart/ --install --cleanup-on-fail --set challtestsrv.enabled=true

# run a basic health check
helm test pebble
kubectl logs pebble-test -c acme-mgmt
kubectl logs pebble-test -c dns-mgmt
kubectl logs pebble-test -c dns
```
TODO