
Deployment options

The README shows you a simple way to get started with Contour on your cluster. This topic explains the details and shows you additional options. Most of this covers running Contour using a Kubernetes Service of Type: LoadBalancer. If you don't have a cluster with that capability, or if you don't want to use it, see the Running without a Kubernetes LoadBalancer section below.

Deployment or DaemonSet?

We provide example deployment manifests for setting up Contour by creating either a DaemonSet or a Deployment.

  • The DaemonSet runs an instance of Contour on each node in your cluster.
  • The Deployment runs two instances of Contour on two arbitrary nodes in the cluster.

In either case, a Service of type: LoadBalancer is set up to forward to the Contour instances.

Install

  • Clone or fork the repository.
  • To install the DaemonSet, navigate to the deployment/ds-json-v1 directory.
  • To install the Deployment, navigate to the deployment/deployment-json-v1 directory instead.

Then run:

kubectl apply -f .

Contour is now deployed. Depending on your cloud provider, it may take some time to configure the load balancer.
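
While you wait, you can watch the Contour pods come up in the heptio-contour namespace (press Ctrl+C to stop watching):

kubectl get pods -n heptio-contour -w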

Under the hood

Each directory contains four files:

  • 01-common.yaml: Creates the heptio-contour Namespace and a ServiceAccount.
  • 02-rbac.yaml: Creates the RBAC rules for Contour. The Contour RBAC permissions are the minimum required for Contour to operate.
  • 02-contour.yaml: Runs the Contour pods with either the DaemonSet or the Deployment. See Architecture for pod details.
  • 02-service.yaml: Creates the Service object so that Contour can be reached from outside the cluster. A sketch of this Service follows below.
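
The Service created by 02-service.yaml is roughly equivalent to the following sketch. It is illustrative only: the targetPort value assumes the Envoy 8080 listener mentioned later in this topic, and the file in your checkout is authoritative.

apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  type: LoadBalancer        # swapped for NodePort in the section below
  selector:
    app: contour            # matches the Contour pods from 02-contour.yaml
  ports:
  - port: 80                # port exposed by the cloud load balancer
    protocol: TCP
    targetPort: 8080        # assumed Envoy listener port inside the pod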

Get your hostname or IP address

To retrieve the IP address or DNS name assigned to your Contour deployment, run:

kubectl get -n heptio-contour service contour -o wide

On AWS, for example, the response looks like:

NAME      CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com   80:30274/TCP   3h        app=contour

Depending on your cloud provider, the EXTERNAL-IP value is an IP address, or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
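
If you need just the address for scripting, a jsonpath query against the same Service works; on AWS the load balancer reports a hostname, while other providers populate the ip field instead:

kubectl get -n heptio-contour service contour -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'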

On Minikube, to get the IP address of the Contour service run:

minikube service -n heptio-contour contour --url

The response is always an IP address, for example http://192.168.99.100:30588.

Test

The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, kuard. To test your Contour deployment, deploy kuard with the following command:

kubectl apply -f deployment/example-workload/kuard.yaml

Then monitor the progress of the deployment with:

kubectl get po,svc,ing -l app=kuard

You should see something like:

NAME                       READY     STATUS    RESTARTS   AGE
po/kuard-370091993-ps2gf   1/1       Running   0          4m
po/kuard-370091993-r63cm   1/1       Running   0          4m
po/kuard-370091993-t4dqk   1/1       Running   0          4m

NAME        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/kuard   10.110.67.121   <none>        80/TCP    4m

NAME        HOSTS     ADDRESS     PORTS     AGE
ing/kuard   *         10.0.0.47   80        4m

... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (*).

Navigate your browser to the IP or DNS address of the Contour Service to interact with the demo application.
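
You can also check the route from the command line. Because the kuard Ingress matches all virtual hosts (*), a plain request to the Contour address should return kuard's HTML (replace CONTOUR_ADDRESS with the EXTERNAL-IP or DNS name you recorded earlier):

curl -s http://CONTOUR_ADDRESS/ | head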

Running without a Kubernetes LoadBalancer

If you can't or don't want to use a Service of type: LoadBalancer there are two alternate ways to run Contour.

NodePort Service

If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer, or if you want to configure the load balancer outside Kubernetes, you can change the 02-service.yaml file to set type to NodePort. Every node in your cluster will then listen on the resulting port and forward traffic to Contour. You can discover that port by taking the second number listed in the PORT(S) column when listing the service, for example 30274 in 80:30274/TCP.
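
A sketch of the edited Service, assuming the same fields as the LoadBalancer sketch above; the nodePort value is illustrative, and if you omit it Kubernetes assigns one from its NodePort range:

apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  type: NodePort            # changed from LoadBalancer
  selector:
    app: contour
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080        # assumed Envoy listener port, as above
    nodePort: 30274         # optional; pins the node port instead of letting Kubernetes pick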

Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.

Host Networking

You can run Contour without a Kubernetes Service at all, by running the Contour pod with host networking. Do this with hostNetwork: true in your pod definition. Envoy will listen directly on port 8080 on each host where it is running. This is best paired with a DaemonSet (perhaps with Node affinity) to ensure that a single instance of Contour runs on each Node. See the AWS NLB tutorial for an example.
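
A minimal fragment of the DaemonSet pod template with host networking enabled, to be merged into the DaemonSet from 02-contour.yaml rather than used on its own; the dnsPolicy line is a common companion setting, not something this topic requires:

spec:
  template:
    spec:
      hostNetwork: true                    # pod shares the node's network namespace; Envoy's 8080 is exposed on the host
      dnsPolicy: ClusterFirstWithHostNet   # keeps cluster DNS resolution working while on the host network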

Running Contour in tandem with another ingress controller

If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you can specify the annotation kubernetes.io/ingress.class: "contour" on all Ingresses that you would like Contour to claim. If the kubernetes.io/ingress.class annotation is present with a value other than "contour", Contour ignores that Ingress.
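
For example, to have Contour claim the kuard Ingress from the Test section explicitly, its metadata would carry the annotation like this (fragment only):

metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "contour"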

Uninstall Contour

To remove Contour from your cluster, delete the namespace:

kubectl delete ns heptio-contour