From bc7fbe30a3acf1dd75eacdda945164030903db56 Mon Sep 17 00:00:00 2001
From: James Peach
Date: Wed, 16 Oct 2019 17:04:07 +1100
Subject: [PATCH] doc: improve shell command formatting in the guides

The Jekyll Markdown processor requires a newline before a block-quote.
If the newline isn't present, the block-quote is rendered as part of
the current paragraph rather than as a separate block.

Update shell command examples in the guides to consistently use `$` as
the shell prompt character.

Signed-off-by: James Peach
---
 design/tls-backend-verification.md |   2 +-
 site/_guides/cert-manager.md       | 106 +++++++++++++++--------------
 site/_guides/deploy-aws-nlb.md     |   2 +-
 site/_guides/grpc-tls-howto.md     |  43 ++++++++----
 site/_resources/release-process.md |  14 ++--
 site/_resources/troubleshooting.md |   2 +-
 site/getting-started.md            |   4 +-
 7 files changed, 94 insertions(+), 79 deletions(-)

diff --git a/design/tls-backend-verification.md b/design/tls-backend-verification.md
index cb65d7de60e..6f29a27d206 100644
--- a/design/tls-backend-verification.md
+++ b/design/tls-backend-verification.md
@@ -74,7 +74,7 @@ The secret object should contain one entry named `ca.key`, the constents will be
 Example:
 
 ```
-% kubectl create secret generic my-certificate-authority --from-file=./ca.key
+$ kubectl create secret generic my-certificate-authority --from-file=./ca.key
 ```
 
 Contour already subscribes to Secrets in all namespaces so Secrets will be piped through to the `dag.KubernetsCache` automatically.
diff --git a/site/_guides/cert-manager.md b/site/_guides/cert-manager.md index 39277400439..b5ffaf5a0e7 100644 --- a/site/_guides/cert-manager.md +++ b/site/_guides/cert-manager.md @@ -38,7 +38,7 @@ After you've been through the steps the first time, you don't need to repeat dep Run: ``` -kubectl apply -f https://j.hept.io/contour-deployment-rbac +$ kubectl apply -f https://j.hept.io/contour-deployment-rbac ``` to set up Contour as a deployment in its own namespace, `projectcontour`, and tell the cloud provider to provision an external IP that is forwarded to the Contour pods. @@ -46,7 +46,7 @@ to set up Contour as a deployment in its own namespace, `projectcontour`, and te Check the progress of the deployment with this command: ``` -% kubectl -n projectcontour get po +$ kubectl -n projectcontour get po NAME READY STATUS RESTARTS AGE contour-f9f68994f-kzjdz 2/2 Running 0 6d contour-f9f68994f-t7h8n 2/2 Running 0 6d @@ -58,7 +58,7 @@ After all the `contour` pods reach `Running` status, move on to the next step. Retrieve the external address of the load balancer assigned to Contour by your cloud provider: ``` -% kubectl get -n projectcontour service contour -o wide +$ kubectl get -n projectcontour service contour -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR contour LoadBalancer 10.51.245.99 35.189.26.87 80:30111/TCP,443:30933/TCP 38d app=contour ``` @@ -68,14 +68,14 @@ The value of `EXTERNAL-IP` varies by cloud provider. 
In this example GKE gives a To make it easier to work with the external load balancer, the tutorial adds a DNS record to a domain we control that points to this load balancer's IP address: ``` -% host gke.davecheney.com +$ host gke.davecheney.com gke.davecheney.com has address 35.189.26.87 ``` On AWS, you specify a `CNAME`, not an `A` record, and it would look something like this: ``` -% host aws.davecheney.com +$ host aws.davecheney.com aws.davecheney.com is an alias for a4d1766f6ce1611e7b27f023b7e83d33–1465548734.ap-southeast-2.elb.amazonaws.com. a4d1766f6ce1611e7b27f023b7e83d33–1465548734.ap-southeast-2.elb.amazonaws.com has address 52.63.20.117 a4d1766f6ce1611e7b27f023b7e83d33–1465548734.ap-southeast-2.elb.amazonaws.com has address 52.64.233.204 @@ -90,13 +90,13 @@ You must deploy at least one Ingress object before Contour can serve traffic. No To deploy KUARD to your cluster, run this command: ``` -kubectl apply -f https://j.hept.io/contour-kuard-example +$ kubectl apply -f https://j.hept.io/contour-kuard-example ``` Check that the pod is running: ``` -% kubectl get po -l app=kuard +$ kubectl get po -l app=kuard NAME READY STATUS RESTARTS AGE kuard-67ff6dd458-sfxkb 1/1 Running 0 19d ``` @@ -108,7 +108,7 @@ Then type the DNS name you set up in the previous step into a web browser, for e You can delete the KUARD service now, or at any time, by running: ``` -kubectl delete -f https://j.hept.io/contour-kuard-example +$ kubectl delete -f https://j.hept.io/contour-kuard-example ``` ## 2. 
Deploy jetstack/cert-manager @@ -121,13 +121,13 @@ There are plenty of other ways to deploy cert-manager, but they are out of scope To keep things simple, we skip cert-manager's Helm installation, and use the supplied YAML manifests: ``` -kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml +$ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml ``` When cert-manager is up and running you should see something like: ``` -% kubectl -n cert-manager get all +$ kubectl -n cert-manager get all NAME READY STATUS RESTARTS AGE pod/cert-manager-54f645f7d6-fhpx2 1/1 Running 0 40s pod/cert-manager-cainjector-79b7fc64f-zt97m 1/1 Running 0 40s @@ -149,10 +149,10 @@ replicaset.apps/cert-manager-webhook-6484955794 1 1 1 ### Deploy the Let's Encrypt cluster issuer -cert-manager supports two different CRDs for configuration, an `Issuer`, which is scoped to a single namespace, +cert-manager supports two different CRDs for configuration, an `Issuer`, which is scoped to a single namespace, and a `ClusterIssuer`, which is cluster-wide. -For Contour to be able to serve HTTPS traffic for an Ingress in any namespace, use `ClusterIssuer`. +For Contour to be able to serve HTTPS traffic for an Ingress in any namespace, use `ClusterIssuer`. Create a file called `letsencrypt-staging.yaml` with the following contents: ``` @@ -173,18 +173,20 @@ spec: replacing `user@example.com` with your email address. This is the email address that Let's Encrypt uses to communicate with you about certificates you request. -The staging Let's Encrypt server is not bound by [the API rate limits of the production server][2]. +The staging Let's Encrypt server is not bound by [the API rate limits of the production server][2]. This approach lets you set up and test your environment without worrying about rate limits. You can then repeat this step for a production Let's Encrypt certificate issuer. 
After you edit and save the file, deploy it: ``` -% kubectl apply -f letsencrypt-staging.yaml +$ kubectl apply -f letsencrypt-staging.yaml clusterissuer "letsencrypt-staging" created ``` -You should see several lines in the output of `kubectl -n cert-manager logs -l app=cert-manager -c cert-manager` informing you that the `ClusterIssuer` is properly registered: +You should see several lines in the output of `kubectl -n cert-manager +logs -l app=cert-manager -c cert-manager` informing you that the +`ClusterIssuer` is properly registered: ``` I0220 02:32:50.614141 1 controller.go:138] clusterissuers controller: syncing item 'letsencrypt-staging' @@ -238,14 +240,14 @@ spec: Deploy to your cluster: ``` -% kubectl apply -f deployment.yaml +$ kubectl apply -f deployment.yaml deployment "httpbin" created -% kubectl get po -l app=httpbin +$ kubectl get po -l app=httpbin NAME READY STATUS RESTARTS AGE httpbin-67fd96d97c-8j2rr 1/1 Running 0 56m ``` -Expose the deployment to the world with a Service. Create a file called `service.yaml` with +Expose the deployment to the world with a Service. Create a file called `service.yaml` with the following contents: ``` @@ -265,9 +267,9 @@ spec: and deploy: ``` -% kubectl apply -f service.yaml +$ kubectl apply -f service.yaml service "httpbin" created -% kubectl get svc httpbin +$ kubectl get svc httpbin NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpbin NodePort 10.51.250.182 8080:31205/TCP 57m ``` @@ -295,7 +297,7 @@ This lets requests to `httpbin.davecheney.com` resolve to the external IP addres They are then forwarded to the Contour pods running in the cluster: ``` -% host httpbin.davecheney.com +$ host httpbin.davecheney.com httpbin.davecheney.com is an alias for gke.davecheney.com. 
gke.davecheney.com has address 35.189.26.87 ``` @@ -303,9 +305,9 @@ gke.davecheney.com has address 35.189.26.87 Change the value of `spec.rules.host` to something that you control, and deploy the Ingress to your cluster: ``` -% kubectl apply -f ingress.yaml +$ kubectl apply -f ingress.yaml ingress "httpbin" created -% kubectl get ing httpbin +$ kubectl get ing httpbin NAME HOSTS ADDRESS PORTS AGE httpbin httpbin.davecheney.com 80 58m ``` @@ -313,18 +315,18 @@ httpbin httpbin.davecheney.com 80 58m Now you can type the host name of the service into a browser, or use curl, to verify it's deployed and everything is working: ``` -% curl http://httpbin.davecheney.com/get +$ curl http://httpbin.davecheney.com/get { - "args": {}, + "args": {}, "headers": { - "Accept": "*/*", - "Content-Length": "0", - "Host": "htpbin.davecheney.com", - "User-Agent": "curl/7.58.0", - "X-Envoy-Expected-Rq-Timeout-Ms": "15000", + "Accept": "*/*", + "Content-Length": "0", + "Host": "htpbin.davecheney.com", + "User-Agent": "curl/7.58.0", + "X-Envoy-Expected-Rq-Timeout-Ms": "15000", "X-Envoy-Internal": "true" - }, - "origin": "10.152.0.2", + }, + "origin": "10.152.0.2", "url": "http://httpbin.davecheney.com/get" } ``` @@ -347,7 +349,7 @@ metadata: spec: tls: - secretName: httpbin - hosts: + hosts: - httpbin.davecheney.com rules: - host: httpbin.davecheney.com @@ -365,7 +367,7 @@ Behind the scenes, cert-manager creates a certificate CRD to manage the lifecycl You can watch the progress of the certificate as it's issued: ``` -% kubectl describe certificate httpbin | tail -n 6 +$ kubectl describe certificate httpbin | tail -n 6 Normal PresentChallenge 1m cert-manager-controller Presenting http-01 challenge for domain httpbin.davecheney.com Normal SelfCheck 1m cert-manager-controller Performing self-check for domain httpbin.davecheney.com Normal ObtainAuthorization 1m cert-manager-controller Obtained authorization for domain httpbin.davecheney.com @@ -377,7 +379,7 @@ You can watch the progress of 
the certificate as it's issued: Wait for the certificate to be issued: ``` -% kubectl describe certificate httpbin | grep -C3 CertIssued +$ kubectl describe certificate httpbin | grep -C3 CertIssued Conditions: Last Transition Time: 2018-02-26T01:26:30Z Message: Certificate issued successfully @@ -389,7 +391,7 @@ Wait for the certificate to be issued: A `kubernetes.io/tls` secret is created with the `secretName` specified in the `tls:` field of the Ingress. ``` -% kubectl get secret httpbin +$ kubectl get secret httpbin NAME TYPE DATA AGE httpbin kubernetes.io/tls 2 3m ``` @@ -401,7 +403,7 @@ This is because the certificate was issued by the Let's Encrypt staging servers This is so you can't accidentally use the staging servers to serve real certificates. ``` -% curl https://httpbin.davecheney.com/get +$ curl https://httpbin.davecheney.com/get curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: https://curl.haxx.se/docs/sslcerts.html @@ -436,7 +438,7 @@ again replacing user@example.com with your email address. Deploy: ``` -% kubectl apply -f letsencrypt-prod.yaml +$ kubectl apply -f letsencrypt-prod.yaml clusterissuer "letsencrypt-prod" created ``` @@ -457,9 +459,9 @@ Next, delete the existing certificate CRD and the Secret that contains the untru This triggers cert-manager to request the certificate again from the Let's Encrypt production servers. ``` -% kubectl delete certificate httpbin +$ kubectl delete certificate httpbin certificate "httpbin" deleted -% kubectl delete secret httpbin +$ kubectl delete secret httpbin secret "httpbin" deleted ``` @@ -467,18 +469,18 @@ Check that the `httpbin` Secret is recreated, to make sure that the certificate Now revisiting our `https://httpbin.davecheney.com` site should show a valid, trusted, HTTPS certificate. 
``` -% curl https://httpbin.davecheney.com/get +$ curl https://httpbin.davecheney.com/get { - "args": {}, + "args": {}, "headers": { - "Accept": "*/*", - "Content-Length": "0", - "Host": "httpbin.davecheney.com", - "User-Agent": "curl/7.58.0", - "X-Envoy-Expected-Rq-Timeout-Ms": "15000", + "Accept": "*/*", + "Content-Length": "0", + "Host": "httpbin.davecheney.com", + "User-Agent": "curl/7.58.0", + "X-Envoy-Expected-Rq-Timeout-Ms": "15000", "X-Envoy-Internal": "true" - }, - "origin": "10.152.0.2", + }, + "origin": "10.152.0.2", "url": "https://httpbin.davecheney.com/get" } ``` @@ -506,7 +508,7 @@ metadata: Now any requests to the insecure HTTP version of your site get an unconditional 301 redirect to the HTTPS version: ``` -% curl -v http://httpbin.davecheney.com/get +$ curl -v http://httpbin.davecheney.com/get * Trying 35.189.26.87… * TCP_NODELAY set * Connected to httpbin.davecheney.com (35.189.26.87) port 80 (#0) @@ -514,13 +516,13 @@ Now any requests to the insecure HTTP version of your site get an unconditional > Host: httpbin.davecheney.com > User-Agent: curl/7.58.0 > Accept: */* -> +> < HTTP/1.1 301 Moved Permanently < location: https://httpbin.davecheney.com/get < date: Tue, 20 Feb 2018 04:11:46 GMT < server: envoy < content-length: 0 -< +< * Connection #0 to host httpbin.davecheney.com left intact ``` diff --git a/site/_guides/deploy-aws-nlb.md b/site/_guides/deploy-aws-nlb.md index b82b18bd29c..cb4a6f972d9 100644 --- a/site/_guides/deploy-aws-nlb.md +++ b/site/_guides/deploy-aws-nlb.md @@ -27,7 +27,7 @@ This creates the `projectcontour` Namespace along with a ServiceAccount, RBAC ru You can get the address of your NLB via: ``` -kubectl get service contour --namespace=projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' +$ kubectl get service contour --namespace=projectcontour -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' ``` ## Test diff --git a/site/_guides/grpc-tls-howto.md b/site/_guides/grpc-tls-howto.md index 
b48b2eec804..891bdf71dd7 100644 --- a/site/_guides/grpc-tls-howto.md +++ b/site/_guides/grpc-tls-howto.md @@ -7,9 +7,10 @@ This document describes the steps required to secure communication between Envoy ## Outcomes The outcome of this is that we will have three Secrets available in the `projectcontour` namespace: -- cacert: contains the CA's public certificate. -- contourcert: contains Contour's keypair, used for serving TLS secured gRPC. This must be a valid certificate for the name `contour` in order for this to work. This is currently hardcoded by Contour. -- envoycert: contains Envoy's keypair, used as a client for connecting to Contour. + +- **cacert:** contains the CA's public certificate. +- **contourcert:** contains Contour's keypair, used for serving TLS secured gRPC. This must be a valid certificate for the name `contour` in order for this to work. This is currently hardcoded by Contour. +- **envoycert:** contains Envoy's keypair, used as a client for connecting to Contour. ### Ways you can get the certificates into your cluster @@ -29,8 +30,9 @@ This is intended as an example to help you get started. For any real deployment, ### Generating a CA keypair First, we need to generate a keypair: + ``` -openssl req -x509 -new -nodes \ +$ openssl req -x509 -new -nodes \ -keyout certs/cakey.pem -sha256 \ -days 1825 -out certs/cacert.pem \ -subj "/O=Project Contour/CN=Contour CA" @@ -41,17 +43,19 @@ Then, the new CA key will be stored in `certs/cakey.pem` and the cert in `certs/ ### Generating Contour's keypair Then, we need to generate a keypair for Contour. First, we make a new private key: + ``` -openssl genrsa -out certs/contourkey.pem 2048 +$ openssl genrsa -out certs/contourkey.pem 2048 ``` Then, we create a CSR and have our CA sign the CSR and issue a cert. 
This uses the file [_integration/cert-contour.ext]({{ site.github.repository_url }}/tree/master/_integration/cert-contour.ext), which ensures that at least one of the valid names of the certificate is the bareword `contour`. This is required for the handshake to succeed, as `contour bootstrap` configures Envoy to pass this as the SNI for the connection. ``` -openssl req -new -key certs/contourkey.pem \ +$ openssl req -new -key certs/contourkey.pem \ -out certs/contour.csr \ -subj "/O=Project Contour/CN=contour" -openssl x509 -req -in certs/contour.csr \ + +$ openssl x509 -req -in certs/contour.csr \ -CA certs/cacert.pem \ -CAkey certs/cakey.pem \ -CAcreateserial \ @@ -65,16 +69,19 @@ At this point, the contour cert and key are in the files `certs/contourcert.pem` ### Generating Envoy's keypair Next, we generate a keypair for Envoy: + ``` -openssl genrsa -out certs/envoykey.pem 2048 +$ openssl genrsa -out certs/envoykey.pem 2048 ``` Then, we generated a CSR and have the CA sign it: + ``` -openssl req -new -key certs/envoykey.pem \ +$ openssl req -new -key certs/envoykey.pem \ -out certs/envoy.csr \ -subj "/O=Project Contour/CN=envoy" -openssl x509 -req -in certs/envoy.csr \ + +$ openssl x509 -req -in certs/envoy.csr \ -CA certs/cacert.pem \ -CAkey certs/cakey.pem \ -CAcreateserial \ @@ -90,13 +97,19 @@ Like the contour cert, this CSR uses the file [_integration/cert-envoy.ext]({{ s Next, we create the required secrets in the target Kubernetes cluster: ``` -kubectl create secret -n projectcontour generic cacert --from-file=./certs/cacert.pem -kubectl create secret -n projectcontour tls contourcert --key=./certs/contourkey.pem --cert=./certs/contourcert.pem -kubectl create secret -n projectcontour tls envoycert --key=./certs/envoykey.pem --cert=./certs/envoycert.pem +$ kubectl create secret -n projectcontour generic cacert \ + --from-file=./certs/cacert.pem + +$ kubectl create secret -n projectcontour tls contourcert \ + --key=./certs/contourkey.pem 
--cert=./certs/contourcert.pem + +$ kubectl create secret -n projectcontour tls envoycert \ + --key=./certs/envoykey.pem --cert=./certs/envoycert.pem ``` Note that we don't put the CA **key** into the cluster, there's no reason for that to be there, and that would create a security problem. That also means that the `cacert` secret can't be a `tls` type secret, as they must be a keypair. - # Conclusion +# Conclusion -Once this process is done, the certificates will be present as Secrets in the `projectcontour` namespace, as required by `examples/contour`. +Once this process is done, the certificates will be present as Secrets in the `projectcontour` namespace, as required by +[examples/contour]({{site.github.repository_url}}/tree/master/examples/contour). diff --git a/site/_resources/release-process.md b/site/_resources/release-process.md index 7c3f4ff3ad7..bc634290805 100644 --- a/site/_resources/release-process.md +++ b/site/_resources/release-process.md @@ -20,8 +20,8 @@ The steps for an alpha or beta release are - Tag the head of master with the relevant release tag (in this case `alpha.1`), and push ```sh -% git tag -a v0.15.0-alpha.1 -m 'contour 0.15.0 alpha 1' -% git push --tags +$ git tag -a v0.15.0-alpha.1 -m 'contour 0.15.0 alpha 1' +$ git push --tags ``` Once the tag is present on master, Github Actions will build the tag and push it to Docker Hub for you. @@ -33,7 +33,7 @@ As contours master branch is under active development, rc and final releases are Create a release branch locally, like so ```sh -% git checkout -b release-0.15 +$ git checkout -b release-0.15 ``` If you are doing a patch release on an existing branch, skip this step and just checkout the branch instead. 
@@ -53,8 +53,8 @@ The Docker tag should be updated from the previous stable release to this new on Tag the head of your release branch with the release tag, and push ```sh -% git tag -a v0.15.0 -m 'contour 0.15.0' -% git push --tags +$ git tag -a v0.15.0 -m 'contour 0.15.0' +$ git push --tags ``` ## Patch release @@ -68,8 +68,8 @@ Get any required changes into the release branch by whatever means you choose. Tag the head of your release branch with the release tag, and push ```sh -% git tag -a v0.15.1 -m 'contour 0.15.1' -% git push --tags +$ git tag -a v0.15.1 -m 'contour 0.15.1' +$ git push --tags ``` ## Updating the `:latest` tag diff --git a/site/_resources/troubleshooting.md b/site/_resources/troubleshooting.md index 8c1bd66c223..e99ca73b6f1 100644 --- a/site/_resources/troubleshooting.md +++ b/site/_resources/troubleshooting.md @@ -14,7 +14,7 @@ Because the HTTP and HTTPS listeners both use the same code, if you have no ingr To test whether Contour is correctly deployed you can deploy the kuard example service: ```sh -% kubectl apply -f https://projectcontour.io/examples/kuard.yaml +$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml ``` ## Access the Envoy admin interface remotely diff --git a/site/getting-started.md b/site/getting-started.md index 14f2fec8486..c69040d0f88 100644 --- a/site/getting-started.md +++ b/site/getting-started.md @@ -20,7 +20,7 @@ Before you start you will need: Run: ```bash -kubectl apply -f {{ site.url }}/quickstart/contour.yaml +$ kubectl apply -f {{ site.url }}/quickstart/contour.yaml ``` This command creates: @@ -40,7 +40,7 @@ If you don't have an application ready to run with Contour, you can explore with Run: ```bash -kubectl apply -f {{ site.url }}/examples/kuard.yaml +$ kubectl apply -f {{ site.url }}/examples/kuard.yaml ``` This example specifies a default backend for all hosts, so that you can test your Contour install. 
It's recommended for exploration and testing only, however, because it responds to all requests regardless of the incoming DNS that is mapped. You probably want to run with specific Ingress rules for specific hostnames.