1 change: 1 addition & 0 deletions docs/versioned/.nav.yml
@@ -255,6 +255,7 @@ nav:
- Installing plugins:
- Install Istio for Knative: install/installing-istio.md
# TODO: docs for kourier, contour, gateway-api
- Install Contour for Knative: install/installing-contour.md
- Install Kafka for Knative: install/eventing/kafka-install.md
- Install RabbitMQ for Knative: install/eventing/rabbitmq-install.md
# N.B. this duplicates an "eventing" topic above, cross-referenced here.
209 changes: 209 additions & 0 deletions docs/versioned/install/installing-contour.md
@@ -0,0 +1,209 @@
---
audience: administrator
components:
- serving
function: how-to
---

# Installing Contour for Knative

This page shows how to install Contour in three ways:

- By using Contour’s example YAML.
- By using the Helm chart for Contour.
- By using the Contour gateway provisioner.
Comment on lines +10 to +14

Member: We should focus on installing the Knative adapter for Contour, rather than installing Contour itself. I'd make "Contour installed on the cluster" a pre-requisite, and then just talk about installing the net-contour controller.

Member: See the discussion from Dave -- it sounds like the Contour installation either needs to use the Knative-published contour.yaml, or two installations using different ingress class names.


It then shows how to deploy a sample workload and route traffic to it through Contour.

This guidance uses all default settings. No additional configuration is required.
Member: I liked how the Kourier docs provided information about configuration options. It may be that net-contour does not have any separate configuration options; if so, we should say that the configuration is done natively through Contour, and point people to the guides at https://projectcontour.io/


## Before you begin

This installation requires the following prerequisites:

- A Kubernetes cluster with the Knative Serving component installed.
- Knative [load balancing](../serving/load-balancing/README.md) is activated.
- HELM installed locally, if selected as the installation method.
Member: "Helm" as a project name is not an initialism, and shouldn't be in all-caps. In any case, this seems like guidance for installing Contour, and we should simply make that a pre-requisite, rather than duplicating Contour's installation instructions.

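As a quick sanity check (assuming Knative Serving is in the default `knative-serving` namespace), you can confirm the Serving pods are running before you continue:

```bash
# All pods in knative-serving should show STATUS Running or Completed
kubectl get pods -n knative-serving
```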

## Supported Contour versions

For information about Contour versions, see the Contour [Compatibility Matrix](https://projectcontour.io/resources/compatibility-matrix/).

## Option 1 - YAML installation

1. Use the following command to install Contour:

```bash
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
```

1. Verify the Contour pods are ready:

```bash
kubectl get pods -n projectcontour -o wide
```

You should see the following results:

- Two Contour pods, each with status Running and 1/1 Ready.
- One or more Envoy pods, each with status Running and 2/2 Ready.
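
If you prefer a scripted check, a `kubectl wait` along these lines should work (a sketch; the deployment name comes from the upstream quickstart manifest):

```bash
# Block until the Contour deployment reports the Available condition
kubectl -n projectcontour wait --for=condition=Available deployment/contour --timeout=120s
```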

## Option 2 - Helm installation

This option requires Helm to be installed locally.

1. Use the following command to add the `bitnami` chart repository that contains the Contour chart:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
```
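
If the repository was already present on your machine, refresh the local chart index so the install picks up the latest Contour chart:

```bash
helm repo update
```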

1. Install the Contour chart:

```bash
helm install my-release bitnami/contour --namespace projectcontour --create-namespace
```

1. Verify Contour is ready:

```bash
kubectl -n projectcontour get po,svc
```

You should see the following results:

- One instance of `pod/my-release-contour-contour` with status Running and 1/1 Ready.
- One or more instances of `pod/my-release-contour-envoy`, each with status Running and 2/2 Ready.
- One instance of `service/my-release-contour`.
- One instance of `service/my-release-contour-envoy`.
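
You can also ask Helm whether the release deployed cleanly (using the `my-release` name from the install step above):

```bash
helm status my-release --namespace projectcontour
```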

## Option 3 - Contour Gateway provisioner

The Gateway provisioner watches for the creation of Gateway API Gateway resources, and dynamically provisions Contour and Envoy instances based on the Gateway's spec.

Although the provisioning request itself is made using a Gateway API resource (Gateway), this method of installation still allows you to use any of the supported APIs for defining virtual hosts and routes: Ingress, HTTPProxy, or Gateway API’s HTTPRoute and TLSRoute.
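
For example, once Contour is running, a minimal HTTPProxy can publish a Service under a hostname. The resource below is illustrative only; the proxy name, hostname, and backing Service `my-app` are hypothetical and not part of this installation:

```bash
# Illustrative HTTPProxy: routes requests for app.example.com to a hypothetical Service
kubectl apply -f - <<EOF
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.com
  routes:
    - services:
        - name: my-app   # hypothetical backend Service
          port: 80
EOF
```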

1. Use the following command to deploy the Gateway provisioner:

```bash
kubectl apply -f https://projectcontour.io/quickstart/contour-gateway-provisioner.yaml
```

1. Verify the Gateway provisioner deployment is available:

```bash
kubectl -n projectcontour get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
contour-gateway-provisioner 1/1 1 1 1m
```

1. Create a GatewayClass:

```bash
kubectl apply -f - <<EOF
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
spec:
  controllerName: projectcontour.io/gateway-controller
EOF
```
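
You can confirm the provisioner has accepted the GatewayClass; the ACCEPTED column should read True once it has been reconciled:

```bash
kubectl get gatewayclass contour
```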

1. Create a Gateway:

```bash
kubectl apply -f - <<EOF
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: contour
  namespace: projectcontour
spec:
  gatewayClassName: contour
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
EOF
```

1. Verify the Gateway is available. This may take up to a minute.

```bash
kubectl -n projectcontour get gateways
NAME CLASS ADDRESS READY AGE
contour contour True 27s
```
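
To script this instead of polling, a `kubectl wait` like the following should work (a sketch; depending on the installed Gateway API version, the readiness condition may be named Ready or Programmed):

```bash
# Block until the Gateway reports Ready (up to two minutes)
kubectl -n projectcontour wait --for=condition=Ready gateway/contour --timeout=120s
```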

1. Verify the Contour pods are ready:

```bash
kubectl -n projectcontour get pods
```

You should see the following results:

- Two Contour pods, each with status Running and 1/1 Ready.
- One or more Envoy pods, each with status Running and 2/2 Ready.

## Test application

Install a web application workload and send traffic to it through Contour.

1. Use the following command to install httpbin:

```bash
kubectl apply -f https://projectcontour.io/examples/httpbin.yaml
```

1. Verify the pods and service are ready:

```bash
kubectl get po,svc,ing -l app=httpbin
```

You should see the following:

- Three httpbin pods, each with status Running and 1/1 Ready.
- One `service/httpbin` with a CLUSTER-IP listed on port 80.
- One Ingress on port 80.

1. The Helm install configures Contour to filter Ingress and HTTPProxy objects based on the `contour` IngressClass name. If you installed with Helm, ensure the Ingress specifies the `contour` ingress class with the following command:

```bash
kubectl patch ingress httpbin -p '{"spec":{"ingressClassName": "contour"}}'
```
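
To confirm the patch took effect, read the field back:

```bash
# Prints "contour" if the patch was applied
kubectl get ingress httpbin -o jsonpath='{.spec.ingressClassName}'
```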

Now you can send traffic to the sample application via Contour and Envoy.

For simplicity and compatibility across all platforms, this guide uses `kubectl port-forward` to get traffic to Envoy. In a production environment you would typically use the Envoy service's address.

1. Port-forward from your local machine to the Envoy service:

If using YAML:

```bash
kubectl -n projectcontour port-forward service/envoy 8888:80
```

If using Helm:

```bash
kubectl -n projectcontour port-forward service/my-release-contour-envoy 8888:80
```

If using the Gateway provisioner:

```bash
kubectl -n projectcontour port-forward service/envoy-contour 8888:80
```

In a browser or via curl, make a request to `http://local.projectcontour.io:8888`. The `local.projectcontour.io` URL is a public DNS record that resolves to `127.0.0.1`, so it works with the forwarded port. You should see the httpbin home page.
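
For example, with curl (`-i` includes the response headers; expect an HTTP 200 from httpbin):

```bash
curl -i http://local.projectcontour.io:8888/
```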

## See also

Contour [Getting Started](https://projectcontour.io/getting-started/) documentation.
Comment on lines +28 to +209

Member: All of these instructions get Contour installed on the cluster, but they don't get net-contour (the Knative component that provides the adapter layer between Knative's abstract Routes and Contour HTTPProxy objects) installed. We have some instructions for that in install-serving-with-yaml.md:

    1. Install the Knative Contour controller by running the command:
      ```bash
      kubectl apply -f {{ artifact(repo="net-contour",org="knative-extensions",file="net-contour.yaml")}}
      ```

    1. Configure Knative Serving to use Contour by default by running the command:
      ```bash
      kubectl patch configmap/config-network \
        --namespace knative-serving \
        --type merge \
        --patch '{"data":{"ingress-class":"contour.ingress.networking.knative.dev"}}'
      ```

    1. Fetch the External IP address or CNAME by running the command:

        ```bash
        kubectl --namespace contour-external get service envoy
        ```

        !!! tip
            Save this to use in the following [Configure DNS](#configure-dns) section.

(I removed the step 1 which used a contour.yaml published by Knative, because I think the necessary changes have already been contributed upstream.)

You may want to document at least the visibility configuration from https://github.com/knative-extensions/net-contour/blob/main/config/config-contour.yaml#L50 as well, though I think most of the configuration for net-contour is inherited from Contour's defaults described in the Contour documentation.
