DevOps - Deploy Joystream node network on EKS with Kubernetes and Pulumi #2533

Merged · 10 commits · Aug 2, 2021
6 changes: 6 additions & 0 deletions devops/infrastructure/node-network/.gitignore
@@ -0,0 +1,6 @@
/bin/
/node_modules/
kubeconfig.yml
package-lock.json
.env
Pulumi.*.yaml
21 changes: 21 additions & 0 deletions devops/infrastructure/node-network/Pulumi.yaml
@@ -0,0 +1,21 @@
name: node-network
runtime: nodejs
description: Kubernetes IaC for Joystream RPC and Validator nodes
template:
config:
aws:profile:
default: joystream-user
aws:region:
default: us-east-1
isMinikube:
description: Whether you are deploying to minikube
default: false
numberOfValidators:
description: Number of validators as starting nodes
default: 2
networkSuffix:
description: Suffix to attach to the network id and name
default: 8129
isLoadBalancerReady:
description: Whether the load balancer service is ready and has been assigned an IP
default: false
113 changes: 113 additions & 0 deletions devops/infrastructure/node-network/README.md
@@ -0,0 +1,113 @@
# Node network automated deployment

Deploys a Joystream node network on an Amazon EKS Kubernetes cluster

## Deploying the App

To deploy your infrastructure, follow the steps below.

### Prerequisites

1. [Install Pulumi](https://www.pulumi.com/docs/get-started/install/)
1. [Install Node.js](https://nodejs.org/en/download/)
1. Install a package manager for Node.js, such as [npm](https://www.npmjs.com/get-npm) or [Yarn](https://yarnpkg.com/en/docs/install).
1. [Configure AWS Credentials](https://www.pulumi.com/docs/intro/cloud-providers/aws/setup/)
1. Optional (for debugging): [Install kubectl](https://kubernetes.io/docs/tasks/tools/)

### Steps

After cloning this repo, from this working directory, run these commands:

1. Install the required Node.js packages:

This installs the dependent packages [needed](https://www.pulumi.com/docs/intro/concepts/how-pulumi-works/) for our Pulumi program.

```bash
$ npm install
```

1. Create a new stack, which is an isolated deployment target for this example:

This will initialize the Pulumi program in TypeScript.

```bash
$ pulumi stack init
```

1. Set the required configuration variables in `Pulumi.<stack>.yaml`

```bash
$ pulumi config set-all --plaintext aws:region=us-east-1 --plaintext aws:profile=joystream-user \
--plaintext numberOfValidators=2 --plaintext isMinikube=true --plaintext networkSuffix=8122
```

If you want to build the stack on AWS, set the `isMinikube` config to `false`:

```bash
$ pulumi config set isMinikube false
```
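
After these commands, the generated `Pulumi.<stack>.yaml` should look roughly like the following. This is a sketch, not part of this diff: Pulumi namespaces project-level keys with the project name (`node-network`) and stores the values as strings.

```yaml
config:
  aws:profile: joystream-user
  aws:region: us-east-1
  node-network:isMinikube: "false"
  node-network:networkSuffix: "8122"
  node-network:numberOfValidators: "2"
```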

1. Stand up the Kubernetes cluster:

Running `pulumi up -y` will deploy the EKS cluster. Note that provisioning a
new EKS cluster takes 10-15 minutes.

1. Once the stack is up and running, we will update the Caddy config so it can obtain an SSL certificate for the AWS load balancer

Set the config variable `isLoadBalancerReady` to `true`:

```bash
$ pulumi config set isLoadBalancerReady true
```

Run `pulumi up -y` to update the Caddy config

1. You can now access the endpoints using `pulumi stack output endpoint1` or `pulumi stack output endpoint2`

The ws-rpc endpoint is `https://<ENDPOINT>/ws-rpc` and the http-rpc endpoint is `https://<ENDPOINT>/http-rpc`
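
As a quick smoke test of the ws-rpc endpoint, you can connect with a generic Substrate client. A minimal sketch, assuming the `@polkadot/api` package is installed and using the `wss://` scheme for WebSocket over TLS (replace `<ENDPOINT>` with your stack output):

```typescript
import { ApiPromise, WsProvider } from '@polkadot/api'

async function main() {
  // Connect through the Caddy reverse proxy provisioned by this stack.
  const provider = new WsProvider('wss://<ENDPOINT>/ws-rpc')
  const api = await ApiPromise.create({ provider })

  // Fetch basic chain information to confirm the node is reachable.
  const [chain, version] = await Promise.all([api.rpc.system.chain(), api.rpc.system.version()])
  console.log(`Connected to ${chain} (node version ${version})`)

  await api.disconnect()
}

main().catch(console.error)
```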

1. Access the Kubernetes cluster using `kubectl`

To access your new Kubernetes cluster using `kubectl`, we need to set up the
`kubeconfig` file. We can leverage the Pulumi stack output in the CLI, as
Pulumi facilitates exporting these objects for us.

```bash
$ pulumi stack output kubeconfig --show-secrets > kubeconfig
$ export KUBECONFIG=$PWD/kubeconfig
$ kubectl get nodes
```

We can also use the stack output to query the cluster for our newly created Deployment:

```bash
$ kubectl get deployment $(pulumi stack output deploymentName) --namespace=$(pulumi stack output namespaceName)
$ kubectl get service $(pulumi stack output serviceName) --namespace=$(pulumi stack output namespaceName)
```

To get logs

```bash
$ kubectl config set-context --current --namespace=$(pulumi stack output namespaceName)
$ kubectl get pods
$ kubectl logs <PODNAME> --all-containers
```

To see the complete Pulumi stack output

```bash
$ pulumi stack output
```

To execute a command

```bash
$ kubectl exec --stdin --tty <PODNAME> -c colossus -- /bin/bash
```

1. Once you've finished experimenting, tear down your stack's resources by destroying and removing it:

```bash
$ pulumi destroy --yes
$ pulumi stack rm --yes
```
135 changes: 135 additions & 0 deletions devops/infrastructure/node-network/caddy.ts
@@ -0,0 +1,135 @@
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import * as dns from 'dns'

/**
 * CaddyServiceDeployment folds together the common pattern of a Kubernetes
 * Deployment running the Caddy reverse proxy and its associated Service object.
 */
export class CaddyServiceDeployment extends pulumi.ComponentResource {
  public readonly deployment: k8s.apps.v1.Deployment
  public readonly service: k8s.core.v1.Service
  public readonly hostname?: pulumi.Output<string>
  public readonly primaryEndpoint?: pulumi.Output<string>
  public readonly secondaryEndpoint?: pulumi.Output<string>

  constructor(name: string, args: ServiceDeploymentArgs, opts?: pulumi.ComponentResourceOptions) {
    super('k8sjs:service:ServiceDeployment', name, {}, opts)

    const labels = { app: name }
    let volumes: pulumi.Input<pulumi.Input<k8s.types.input.core.v1.Volume>[]> = []
    let caddyVolumeMounts: pulumi.Input<pulumi.Input<k8s.types.input.core.v1.VolumeMount>[]> = []

    // dns.lookup exposes only a callback API, so wrap it in a Promise that
    // resolves to every address the given hostname points at.
    async function lookupPromise(url: string): Promise<dns.LookupAddress[]> {
      return new Promise((resolve, reject) => {
        dns.lookup(url, { all: true }, (err: any, addresses: dns.LookupAddress[]) => {
          if (err) reject(err)
          resolve(addresses)
        })
      })
    }

    this.service = new k8s.core.v1.Service(
      name,
      {
        metadata: {
          name: name,
          namespace: args.namespaceName,
          labels: labels,
        },
        spec: {
          // Minikube cannot provision cloud load balancers, so fall back to NodePort there.
          type: args.isMinikube ? 'NodePort' : 'LoadBalancer',
          ports: [
            { name: 'http', port: 80 },
            { name: 'https', port: 443 },
          ],
          selector: labels,
        },
      },
      { parent: this }
    )

    // Only populated once the cloud provider has assigned an address to the load balancer.
    this.hostname = this.service.status.loadBalancer.ingress[0].hostname

    if (args.lbReady) {
      let caddyConfig: pulumi.Output<string>
      const lbIps: pulumi.Output<dns.LookupAddress[]> = this.hostname.apply((dnsName) => {
        return lookupPromise(dnsName)
      })

      // Render a Caddyfile site block for one load balancer IP. nip.io provides
      // wildcard DNS (<ip>.nip.io resolves to <ip>), giving Caddy a hostname it
      // can obtain a TLS certificate for.
      function getProxyString(ipAddress: pulumi.Output<string>) {
        return pulumi.interpolate`${ipAddress}.nip.io/ws-rpc {
reverse_proxy node-network:9944
}

${ipAddress}.nip.io/http-rpc {
reverse_proxy node-network:9933
}
`
      }

      caddyConfig = pulumi.interpolate`${getProxyString(lbIps[0].address)}
${getProxyString(lbIps[1].address)}`

      this.primaryEndpoint = pulumi.interpolate`${lbIps[0].address}.nip.io`
      this.secondaryEndpoint = pulumi.interpolate`${lbIps[1].address}.nip.io`

      // Ship the rendered Caddyfile into the cluster as a ConfigMap and mount it
      // at /etc/caddy/Caddyfile in the Caddy container below.
      const keyConfig = new k8s.core.v1.ConfigMap(
        name,
        {
          metadata: { namespace: args.namespaceName, labels: labels },
          data: { 'fileData': caddyConfig },
        },
        { parent: this }
      )
      const keyConfigName = keyConfig.metadata.apply((m) => m.name)

      caddyVolumeMounts.push({
        mountPath: '/etc/caddy/Caddyfile',
        name: 'caddy-volume',
        subPath: 'fileData',
      })
      volumes.push({
        name: 'caddy-volume',
        configMap: {
          name: keyConfigName,
        },
      })
    }

    this.deployment = new k8s.apps.v1.Deployment(
      name,
      {
        metadata: { namespace: args.namespaceName, labels: labels },
        spec: {
          selector: { matchLabels: labels },
          replicas: 1,
          template: {
            metadata: { labels: labels },
            spec: {
              containers: [
                {
                  name: 'caddy',
                  image: 'caddy',
                  ports: [
                    { name: 'caddy-http', containerPort: 80 },
                    { name: 'caddy-https', containerPort: 443 },
                  ],
                  volumeMounts: caddyVolumeMounts,
                },
              ],
              volumes,
            },
          },
        },
      },
      { parent: this }
    )
  }
}

export interface ServiceDeploymentArgs {
  namespaceName: pulumi.Output<string>
  lbReady?: boolean
  isMinikube?: boolean
}
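
For context, this component would be instantiated from the stack's entry point. A hypothetical `index.ts` sketch (the namespace resource and config lookups are assumptions; the actual entry point is not part of this diff, though the `endpoint1`/`endpoint2` exports mirror the README's `pulumi stack output` commands):

```typescript
import * as k8s from '@pulumi/kubernetes'
import * as pulumi from '@pulumi/pulumi'
import { CaddyServiceDeployment } from './caddy'

const config = new pulumi.Config()

// Hypothetical namespace resource; the real index.ts is not shown in this diff.
const ns = new k8s.core.v1.Namespace('node-network')

const caddy = new CaddyServiceDeployment('caddy-proxy', {
  namespaceName: ns.metadata.name,
  lbReady: config.getBoolean('isLoadBalancerReady') || false,
  isMinikube: config.getBoolean('isMinikube'),
})

// Matches the `pulumi stack output endpoint1` / `endpoint2` commands in the README.
export const endpoint1 = caddy.primaryEndpoint
export const endpoint2 = caddy.secondaryEndpoint
```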
29 changes: 29 additions & 0 deletions devops/infrastructure/node-network/configMap.ts
@@ -0,0 +1,29 @@
import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'
import * as fs from 'fs'

export class configMapFromFile extends pulumi.ComponentResource {
  public readonly configName?: pulumi.Output<string>

  constructor(name: string, args: ConfigMapArgs, opts: pulumi.ComponentResourceOptions = {}) {
    super('pkg:node-network:configMap', name, {}, opts)

    // Read the file once at deploy time and store its contents under the
    // 'fileData' key of a ConfigMap in the target namespace.
    this.configName = new k8s.core.v1.ConfigMap(
      name,
      {
        metadata: {
          namespace: args.namespaceName,
        },
        data: {
          'fileData': fs.readFileSync(args.filePath).toString(),
        },
      },
      opts
    ).metadata.apply((m) => m.name)
  }
}

export interface ConfigMapArgs {
  filePath: string
  namespaceName: pulumi.Output<string>
}
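
A hypothetical usage sketch for `configMapFromFile` (the file path and resource names are assumptions for illustration):

```typescript
import * as k8s from '@pulumi/kubernetes'
import { configMapFromFile } from './configMap'

const ns = new k8s.core.v1.Namespace('node-network') // hypothetical namespace

// Publish a local file as a ConfigMap; pods can mount it through the
// 'fileData' key, the same way caddy.ts mounts its rendered Caddyfile.
const chainSpec = new configMapFromFile('chain-spec', {
  filePath: './chainspec.json', // assumed path, for illustration only
  namespaceName: ns.metadata.name,
})

export const chainSpecConfigName = chainSpec.configName
```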