feat(deploy): using Helm for API deployment
xmlking committed Apr 5, 2019
1 parent c2b69f4 commit 92add74
Showing 47 changed files with 1,373 additions and 65 deletions.
2 changes: 2 additions & 0 deletions .deploy/api/Dockerfile
@@ -27,6 +27,8 @@ ENV NODE_ENV production
RUN $(npm bin)/ng build api --prod

# Final stage: the running container.
#FROM astefanutti/scratch-node
#COPY --from=busybox:1.30.1 /bin/busybox /bin/busybox
FROM mhart/alpine-node:11

# Import the user and group files from the first stage.
22 changes: 11 additions & 11 deletions .deploy/api/README.md
@@ -1,8 +1,9 @@
# API

Build and Deploy NgxApi

### Build

```bash
# build
VERSION=1.5.0-SNAPSHOT
@@ -25,13 +26,18 @@ docker image prune -f
```

### Run

```bash
# start the ngxapi pod in interactive mode. Use 'Ctrl+C' to terminate the pod and delete the temp service.
kubectl run -it --rm ngxapi --port 3000 --hostport=3000 --expose=true --image=xmlking/ngxapi:$VERSION --restart=Never --env TYPEORM_HOST=ngxdb-postgresql
kubectl port-forward ngxapi 3000:3000
#kubectl exec -it ngxapi /bin/busybox sh
kubectl exec -it ngxapi -- /bin/sh
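# tail the pod logs (useful while exercising the endpoints)
kubectl logs ngxapi -f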

# if you are using `docker-compose` instead of `Kubernetes`
docker-compose up api
# docker run -it --env TYPEORM_HOST=postgres -p 3000:3000 xmlking/ngxapi
# to see directory content (as we are using a scratch container, we don't have any Unix commands to interact with):
docker-compose exec api node
docker-compose exec api node -e 'console.log(__dirname);'
docker-compose exec api node -e "const fs = require('fs'); fs.readdirSync('.').forEach(file => { console.log(file); });"
@@ -51,9 +57,3 @@ curl -v -X GET \
### Deploy

Follow instructions from [manual](./manual) or [helm](./helm) or [OpenShift](./openshift)
2 changes: 2 additions & 0 deletions .deploy/api/helm/.sops.yaml
@@ -0,0 +1,2 @@
creation_rules:
- pgp: 438F624ADE96A9DE20DE7C5672EB37BA9AB0F7E7
112 changes: 112 additions & 0 deletions .deploy/api/helm/README.md
@@ -0,0 +1,112 @@
# NgxApi Helm

Deploying `NgxApi` to `Kubernetes` via `Helm`

## Prerequisites

1. Helm command line and Tiller backend [installed](../../helm/README.md).
2. `helm-secrets` [installed](../../helm/README.md).
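If the `helm-secrets` plugin is not installed yet, a minimal sketch (the plugin repository URL is an assumption; the linked README is canonical):

```bash
# install the helm-secrets plugin (repository URL is an assumption)
helm plugin install https://github.com/futuresimple/helm-secrets
helm plugin list    # verify the plugin is registered
```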

First, create a plain `secrets.dev.yaml`, e.g.:

```yaml
envSecrets:
- name: TYPEORM_PASSWORD
value: postgres321
- name: EMAIL_AUTH_PASS
value: auth_pass
- name: VAPID_PRIVATE_KEY
value: cwh2CYK5h_B_Gobnv8Ym9x61B3qFE2nTeb9BeiZbtMI
```

Encrypt it before checking it in to Git:

```bash
helm secrets enc secrets.dev.yaml
# verify
helm secrets view secrets.dev.yaml
```
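The plugin can also decrypt for local inspection or editing; a minimal sketch, assuming the standard `helm-secrets` subcommands:

```bash
helm secrets dec secrets.dev.yaml    # writes a decrypted copy to secrets.dev.yaml.dec
helm secrets edit secrets.dev.yaml   # decrypt, edit in $EDITOR, re-encrypt on save
```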

## Deploy

### With Tiller

```bash
cd .deploy/api/helm

# To install the chart with the release name `ngxapi`
# the `--dry-run --debug` flags let you preview the rendered manifests before you actually deploy
# use `secrets` plugin for on-the-fly decryption
helm secrets install --name=ngxapi --namespace=default -f values-dev.yaml -f secrets.dev.yaml ./nodeapp

# verify deployment
helm ls
kubectl get all,configmap,secret,ingress,replicasets -lapp.kubernetes.io/instance=ngxapi
kubectl describe pod ngxapi-nodeapp
kubectl get deployment ngxapi-nodeapp -o yaml
kubectl get ingress ngxapi-nodeapp -o yaml

POD_NAME=$(kubectl get pods -lapp.kubernetes.io/instance=ngxapi -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /bin/sh
kubectl logs $POD_NAME -f
echo | openssl s_client -showcerts -connect ngxapi.traefik.k8s:443 2>/dev/null


# To update
helm secrets upgrade --namespace=default -f values-dev.yaml -f secrets.dev.yaml ngxapi ./nodeapp

# To uninstall/delete the `ngxapi` release
helm delete ngxapi
helm delete ngxapi --purge # delete ngxapi and purge

# Scale to zero
kubectl scale deployment ngxapi-nodeapp --replicas=0
```
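Since Tiller records release revisions, a failed upgrade can be rolled back; a sketch using stock Helm 2 commands:

```bash
helm history ngxapi      # list recorded revisions of the release
helm rollback ngxapi 1   # roll back to revision 1
```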

### Without Tiller

```bash
cd .deploy/api/helm

helm secrets template ./nodeapp \
--name ngxapi \
--namespace default \
-f values-dev.yaml \
-f secrets.dev.yaml \
--output-dir generated

kubectl apply --recursive -f generated/nodeapp/* --namespace default
```
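There is no Tiller release to delete in this mode; cleanup mirrors the apply step:

```bash
# remove everything that was applied from the generated manifests
kubectl delete --recursive -f generated/nodeapp/* --namespace default
```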


### Access NgxApi

NgxApi can be accessed:

* Within your cluster, at the following DNS name at port 80 (see the in-cluster check after this list):

```
ngxapi-nodeapp.default.svc.cluster.local
```

* From outside the cluster:

```
https://ngxapi.traefik.k8s
```

* From outside the cluster, when `NodePort` is used in `values.yaml`, run these commands in the same shell:

```bash
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services ngxapi-nodeapp)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
```
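To verify the in-cluster DNS name, a throwaway pod works; a sketch reusing the busybox image referenced in the Dockerfile:

```bash
# run a temporary pod and hit the service from inside the cluster
kubectl run -it --rm testclient --image=busybox:1.30.1 --restart=Never \
  -- wget -qO- http://ngxapi-nodeapp.default.svc.cluster.local
```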

## Test

```bash
curl -X GET "https://ngxapi.traefik.k8s/" -k -H "accept: application/json"
curl -X GET "https://ngxapi.traefik.k8s/echo?sumo=demo" -k -H "accept: application/json"
```

## Reference
3 changes: 3 additions & 0 deletions .deploy/api/helm/generated/.gitignore
@@ -0,0 +1,3 @@
*
!.gitignore

22 changes: 22 additions & 0 deletions .deploy/api/helm/nodeapp/.helmignore
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
7 changes: 7 additions & 0 deletions .deploy/api/helm/nodeapp/Chart.yaml
@@ -0,0 +1,7 @@
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: nodeapp
sources:
- https://github.com/xmlking/ngx-starter-kit/.deploy/api/helm/nodeapp
version: 0.1.0
134 changes: 134 additions & 0 deletions .deploy/api/helm/nodeapp/README.md
@@ -0,0 +1,134 @@
# Helm Chart Templates for Node.js in Kubernetes

This project provides template Helm Charts for deploying a Node.js web application into any Kubernetes based cloud.

The templates require your application to be built into a Docker image. The [Docker Template](../../Dockerfile) project provides assistance in creating an image for your application.

In order to use these templates, copy the files from this project into your application directory. You should only need to edit the `Chart.yaml` and `values.yaml` files.

## Prerequisites

Using the template Helm charts assumes the following prerequisites are complete:

1. You have a Kubernetes cluster
This could be one hosted by a cloud provider or running locally, for example using [Minikube](https://kubernetes.io/docs/setup/minikube/)

2. You have kubectl installed and configured for your cluster
The [Kubernetes command line](https://kubernetes.io/docs/tasks/tools/install-kubectl/) tool, `kubectl`, is used to view and control your Kubernetes cluster.

3. You have the Helm command line and Tiller backend installed
[Helm and Tiller](https://docs.helm.sh/using_helm/) provide the command line tool and backend service for deploying your application using the Helm chart.

4. You have created and published a Docker image for your application
The Docker Template project provides guidance on [building](../../README.md#build) and [publishing it to the DockerHub registry](../../README.md#build).

5. Your application has `/live` and `/ready` health check endpoints
This allows Kubernetes to restart your application if it fails or becomes unresponsive.
The [@nestjs/terminus](https://github.com/nestjs/terminus) add-on can be used to add health check endpoints; a quick smoke test follows this list.
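Once the application is deployed, the endpoints can be checked by hand; a minimal smoke test, assuming the `ngxapi.traefik.k8s` ingress host used in the deploy steps:

```bash
curl -k https://ngxapi.traefik.k8s/live    # liveness endpoint
curl -k https://ngxapi.traefik.k8s/ready   # readiness endpoint
```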

## Adding the Chart to your Application

In order to add Helm Charts to your application, copy the `nodeapp` directory from this project into your application's root directory.

You then need to make a single change before the charts are usable: set the `image.repository` value for your application.

### Setting the `image.repository` parameter

In order to change the `image.repository` parameter, open the `nodeapp/values.yaml` file and change the following entry:

```yaml
image:
  repository: <namespace>/nodeapp
```

Set `<namespace>` to your namespace on DockerHub where you published your application as `nodeapp`.
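The value can also be set at install time without editing the file; a sketch using the image name and version from the build instructions:

```bash
helm install --name ngxapi --set image.repository=xmlking/ngxapi --set image.tag=1.5.0-SNAPSHOT .
```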

## Configuring the Chart for your Application

The following table lists the configurable parameters of the template Helm chart and their default values.

`livenessProbe` checks whether the pod is in a bad state and will `restart` the pod if the probe fails.
`readinessProbe` checks whether the service is in a healthy state and will remove the pod from the `service/loadbalancer` if the probe fails.
Both are tuned with the parameters below; an override sketch follows the table.

| Parameter | Description | Default |
| ----------------------- | --------------------------------------------- | ---------------------------------------------------------- |
| `image.repository` | Image repository | `<namespace>/nodeapp` |
| `image.tag` | Image tag | `latest` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `livenessProbe.path` | Liveness probe `path` | `/live` |
| `livenessProbe.initialDelaySeconds` | How long to wait after pod start before the first liveness check | `30` |
| `livenessProbe.periodSeconds` | Interval between liveness checks | `10` |
| `livenessProbe.timeoutSeconds` | A response taking longer than this many seconds counts as a failed check | `3` |
| `livenessProbe.failureThreshold` | After this many consecutive failures the pod is considered to be in a bad state and is restarted | `3` |
| `livenessProbe.successThreshold` | A single successful check marks the pod as healthy again | `1` |
| `readinessProbe.path` | Readiness probe `path` | `/ready` |
| `readinessProbe.initialDelaySeconds` | How long to wait after pod start before the first readiness check; keep this minimal so the service receives requests as soon as it is ready | `30` |
| `readinessProbe.periodSeconds` | Interval between readiness checks | `10` |
| `readinessProbe.timeoutSeconds` | A response taking longer than this many seconds counts as a failed check | `3` |
| `readinessProbe.failureThreshold` | After this many consecutive failures the pod is removed from the load balancer | `3` |
| `readinessProbe.successThreshold` | A single successful check adds the pod back to the load balancer | `1` |
| `service.type` | Kubernetes service type exposing the port | `ClusterIP` |
| `service.nodePort` | The node port used if the service is of type `NodePort` | `""` |
| `service.port` | TCP port for this service | `3000` |
| `resources` | CPU and memory resource limits | `{}` (no default set) |
| `autoscaling.enabled` | Enable HorizontalPodAutoscaler | `{}` |
| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
| `ingress.enabled` | If `true`, an ingress is created | `false` |
| `ingress.annotations` | Annotations for the ingress | `{}` |
| `ingress.path` | A list of ingress paths | `[/]` |
| `ingress.hosts` | A list of ingress hosts | `[ngxapi.example.com]` |
| `ingress.tls` | A list of [IngressTLS](https://v1-9.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#ingresstls-v1beta1-extensions) items | `[]` |
| `metrics.enabled` | Add Prometheus annotations | `false` |
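Any parameter in the table can be overridden per environment with `--set` on top of the values files; a sketch:

```bash
helm secrets install --name=ngxapi --namespace=default \
  -f values-dev.yaml -f secrets.dev.yaml \
  --set livenessProbe.initialDelaySeconds=15 \
  --set readinessProbe.periodSeconds=5 \
  ./nodeapp
```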


## Using the Chart to deploy your Application to Kubernetes

In order to use the Helm chart to deploy and verify your application in Kubernetes, run the following commands:

1. From the directory containing `Chart.yaml`, run:

```sh
helm install --name ngxapi .
```
This deploys and runs your application in Kubernetes, and prints the following text to the console:

```sh
Congratulations, you have deployed your Node.js Application to Kubernetes using Helm!

To verify your application is running, run the following two commands to set the SAMPLE_NODE_PORT and SAMPLE_NODE_IP environment variables to the location of your application:

export SAMPLE_NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services nodeapp-service)
export SAMPLE_NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

Then open your web browser to http://${SAMPLE_NODE_IP}:${SAMPLE_NODE_PORT} from the command line, e.g.:

open http://${SAMPLE_NODE_IP}:${SAMPLE_NODE_PORT}
```

2. Copy, paste and run the `export` lines printed to the console, e.g.:

```sh
export SAMPLE_NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services nodeapp-service)
export SAMPLE_NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
```

3. Open a browser to view your application:
Open your browser to `http://${SAMPLE_NODE_IP}:${SAMPLE_NODE_PORT}` from the command line using:

```sh
open http://${SAMPLE_NODE_IP}:${SAMPLE_NODE_PORT}
```

Your application should now be visible in your browser.


## Uninstalling your Application

If you installed your application with:

```sh
helm install --name ngxapi .
```
then you can:

* Find the deployment using `helm list --all` and searching for an entry with the release name "ngxapi".
* Remove the application with `helm delete --purge ngxapi`.
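Putting those together, a minimal sketch:

```bash
helm list --all | grep ngxapi   # locate the release
helm delete --purge ngxapi      # delete the release and purge its history
```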
21 changes: 21 additions & 0 deletions .deploy/api/helm/nodeapp/templates/NOTES.txt
@@ -0,0 +1,21 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "nodeapp.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "nodeapp.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "nodeapp.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nodeapp.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:3000
{{- end }}
32 changes: 32 additions & 0 deletions .deploy/api/helm/nodeapp/templates/_helpers.tpl
@@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "nodeapp.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nodeapp.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nodeapp.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
