
Update installation manifests and instructions (#234)
pleshakov committed Feb 8, 2018
1 parent 6043335 commit 5a4b11c
Showing 28 changed files with 424 additions and 393 deletions.
157 changes: 157 additions & 0 deletions docs/installation.md
# Installing the Ingress Controller

## Prerequisites

Make sure you have access to the Ingress controller image:

* For NGINX Ingress controller, use the image `nginxdemos/nginx-ingress` from [DockerHub](https://hub.docker.com/r/nginxdemos/nginx-ingress/).
* For NGINX Plus Ingress controller, build your own image and push it to your private Docker registry by following the instructions from [here](../nginx-controller).

The installation manifests are located in the [install](../install) folder. In the steps below we assume that you will be running the commands from that folder.

## 1. Create a Namespace, a Service Account, and the Default Secret

1. Create a namespace and a service account for the Ingress controller:
```
$ kubectl apply -f common/ns-and-sa.yaml
```

1. Create a secret with a TLS certificate and a key for the default server in NGINX:
```
$ kubectl apply -f common/default-server-secret.yaml
```

**Note**: The default server returns the Not Found page with the 404 status code for all requests for domains for which there are no Ingress rules defined. For testing purposes we include a self-signed certificate and key that we generated. However, we recommend that you use your own certificate and key.
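If you want to substitute your own certificate and key, you can generate a self-signed pair with `openssl` and paste the base64-encoded values into `common/default-server-secret.yaml`. A minimal sketch (the subject and file names here are illustrative; in production use a certificate issued for your real domain):

```shell
# Generate a self-signed certificate and key for the default server
# (illustrative subject; replace with your own)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=default-server" \
  -keyout default.key -out default.crt

# Base64-encode both files; paste the output into the Secret manifest
base64 < default.crt | tr -d '\n' > default.crt.b64
base64 < default.key | tr -d '\n' > default.key.b64
```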

1. *Optional*. Create a config map for customizing NGINX configuration (read more about customization [here](../examples/customization)):
```
$ kubectl apply -f common/nginx-config.yaml
```
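For reference, a minimal sketch of such a config map (the `proxy-connect-timeout` key is just one example of a customization key; the resource name and namespace follow the manifests in this repository — check the customization examples for the keys the controller actually supports):
```
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "30s"
```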

## 2. Configure RBAC

If RBAC is enabled in your cluster, create a cluster role and bind it to the service account created in Step 1:
```
$ kubectl apply -f rbac/rbac.yaml
```

**Note**: To perform this step you must be a cluster admin.

## 3. Deploy the Ingress Controller

We include two options for deploying the Ingress controller:
* *Deployment*. Use a Deployment if you plan to dynamically change the number of Ingress controller replicas.
* *DaemonSet*. Use a DaemonSet for deploying the Ingress controller on every node or a subset of nodes.

### 3.1 Create a Deployment

For NGINX, run:
```
$ kubectl apply -f deployment/nginx-ingress.yaml
```

For NGINX Plus, run:
```
$ kubectl apply -f deployment/nginx-plus-ingress.yaml
```

**Note**: Update the `nginx-plus-ingress.yaml` with the container image that you have built.

Kubernetes will create one Ingress controller pod.


### 3.2 Create a DaemonSet

For NGINX, run:
```
$ kubectl apply -f daemon-set/nginx-ingress.yaml
```

For NGINX Plus, run:
```
$ kubectl apply -f daemon-set/nginx-plus-ingress.yaml
```

**Note**: Update the `nginx-plus-ingress.yaml` with the container image that you have built.

Kubernetes will create an Ingress controller pod on every node of the cluster. Read [this doc](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) to learn how to run the Ingress controller on a subset of nodes, instead of every node of the cluster.
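In short, running on a subset of nodes is done by labeling the chosen nodes and adding a `nodeSelector` (or node affinity) to the pod template of the DaemonSet. A sketch, assuming a hypothetical `role: ingress` node label:
```
# Added to the DaemonSet manifest under spec.template.spec:
nodeSelector:
  role: ingress
```
The nodes can be labeled with `kubectl label node <node-name> role=ingress`.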

### 3.3 Check that the Ingress Controller is Running

Run the following command to make sure that the Ingress controller pods are running:
```
$ kubectl get pods --namespace=nginx-ingress
```

## 4. Get Access to the Ingress Controller

**If you created a daemonset**, ports 80 and 443 of the Ingress controller container are mapped to the same ports of the node where the container is running. To access the Ingress controller, use those ports and an IP address of any node of the cluster where the Ingress controller is running.

**If you created a deployment**, below are two options for accessing the Ingress controller pods.

### 4.1 Service with the Type NodePort

Create a service with the type *NodePort*:
```
$ kubectl create -f service/nodeport.yaml
```
Kubernetes will allocate two ports on every node of the cluster. To access the Ingress controller, use an IP address of any node of the cluster along with two allocated ports. Read more about the type NodePort [here](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
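For reference, a sketch of what such a NodePort service might look like (the selector label is an assumption; the actual manifest is in `service/nodeport.yaml`):
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx-ingress
```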

### 4.2 Service with the Type LoadBalancer

Create a service with the type *LoadBalancer*. Kubernetes will allocate and configure a cloud load balancer for load balancing the Ingress controller pods.

Create a service using a manifest for your cloud provider:
* For GCP or Azure, run:
```
$ kubectl apply -f service/loadbalancer.yaml
```
* For AWS, run:
```
$ kubectl apply -f service/loadbalancer-aws.yaml
```
Kubernetes will allocate a Classic Load Balancer (ELB) in TCP mode with the PROXY protocol enabled to pass the client's information (the IP address and the port). NGINX must be configured to use the PROXY protocol:
* Add the following keys to the config map file `nginx-config.yaml` from Step 1:
```
proxy-protocol: "True"
real-ip-header: "proxy_protocol"
set-real-ip-from: "0.0.0.0/0"
```
* Update the config map:
```
$ kubectl apply -f common/nginx-config.yaml
```
**Note**: For AWS, additional options regarding an allocated load balancer are available, such as the type of a load balancer and SSL termination. Read [this doc](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) to learn more.

Use the public IP of the load balancer to access the Ingress controller. To get the public IP:
* For GCP or Azure, run:
```
$ kubectl get svc nginx-ingress --namespace=nginx-ingress
```
* In the case of AWS ELB, kubectl does not report a public IP, because the IP addresses of the ELB are not static; rely on the ELB DNS name instead. However, for testing purposes you can resolve the DNS name into IP addresses. To get the DNS name of the ELB, run:
```
$ kubectl describe svc nginx-ingress --namespace=nginx-ingress
```
You can resolve the DNS name into an IP address using `nslookup`:
```
$ nslookup <dns-name>
```

Read more about the type LoadBalancer [here](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer).

## 5. Access the Live Activity Monitoring Dashboard

For NGINX Plus, you can access the live activity monitoring dashboard:
1. Use the `kubectl port-forward` command to forward connections from port 8080 on your local machine to port 8080 of an NGINX Plus Ingress controller pod (replace `<nginx-plus-ingress-pod>` with the actual name of a pod):
```
$ kubectl port-forward <nginx-plus-ingress-pod> 8080:8080 --namespace=nginx-ingress
```
1. Open your browser at http://127.0.0.1:8080/status.html to access the dashboard.

## Uninstall the Ingress Controller

Delete the `nginx-ingress` namespace to uninstall the Ingress controller along with all the auxiliary resources that were created:
```
$ kubectl delete namespace nginx-ingress
```
112 changes: 23 additions & 89 deletions examples/complete-example/README.md
# Example

In this example we deploy the NGINX or NGINX Plus Ingress controller, a simple web application, and then configure load balancing for that application using the Ingress resource.

## Running the Example

## 1. Deploy the Ingress Controller

1. Follow the installation [instructions](../../docs/installation.md) to deploy the Ingress controller.

1. Save the public IP address of the Ingress controller into a shell variable:
```
$ IC_IP=XXX.YYY.ZZZ.III
```

## 2. Deploy the Cafe Application

Create the coffee and the tea deployments and services:
```
$ kubectl create -f cafe.yaml
```

## 3. Configure Load Balancing

1. Create a secret with the TLS certificate and key for the application:
```
$ kubectl create -f cafe-secret.yaml
```

2. Create an Ingress resource:
```
$ kubectl create -f cafe-ingress.yaml
```

## 4. Test the Application

1. To access the application, curl the coffee and the tea services. We'll use `curl`'s --insecure option to turn off certificate verification of our self-signed certificate and the --resolve option to set the Host header of a request with `cafe.example.com`.

To get coffee:
```
$ curl --resolve cafe.example.com:443:$IC_IP https://cafe.example.com/coffee --insecure
Server address: 10.12.0.18:80
Server name: coffee-7586895968-r26zn
...
```
If you prefer tea:
```
$ curl --resolve cafe.example.com:443:$IC_IP https://cafe.example.com/tea --insecure
Server address: 10.12.0.19:80
Server name: tea-7cd44fcb4d-xfw2x
...
```

**Note**: If you're using a NodePort service to expose the Ingress controller, replace port 443 in the commands above with the node port that corresponds to port 443.

1. If you're using NGINX Plus, you can open the live activity monitoring dashboard:
    1. Follow the [instructions](../../docs/installation.md#5-access-the-live-activity-monitoring-dashboard) to access the dashboard.
    1. If you go to the Upstream tab, you'll see: ![dashboard](dashboard.png)
66 changes: 66 additions & 0 deletions examples/complete-example/cafe.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: tea
16 changes: 0 additions & 16 deletions examples/complete-example/coffee-rc.yaml

This file was deleted.
