119 changes: 59 additions & 60 deletions README.md
@@ -3,21 +3,17 @@
[![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/smarter)](https://artifacthub.io/packages/search?repo=smarter)
## This demo makes the following assumptions about your environment

In the case you wish to deploy the demo we assume you have done the following:
- You should have a cloud-based k3s master dedicated for edge deployment (we will refer to this as k3s-edge-master) before proceeding any further
- if you don't have a k3s-edge-master, you can follow [these instructions](./k3s-edge-master.md)
In this guide we assume you have done the following:
- You should have a cloud-based k3s server dedicated for edge deployment (we will refer to this as k3s-edge-server) before proceeding any further
- if you don't have a k3s-edge-server, you can follow [these instructions](./k3s-edge-server.md)
- You should also have InfluxDB and Grafana instances installed in a separate Kubernetes cluster
- these may be installed on a second cloud node, with its own k3s master, we will refer to this as the cloud-data-node
- if you don't have a cloud-data-node, you can follow [these instructions](./cloud-data-node.md)
- You will also need an installed k3s edge node which has already been setup to talk to k3s-edge-master
- instructions for installing a SMARTER image on Xavier AGX 16GB or Rpi4 are available [here](http://gitlab.com/arm-research/smarter/smarter-yocto)
- instructions for registering an arbitrary arm64 node running a **64 bit kernel and user space with docker installed** are available [here](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`
- You will need a KUBECONFIG that is authenticated against the k3s-edge-master on the Dev machine (where you intend to run these commands)
- Using our provided node images, your nodes should automatically register with the edge k3s master. You can verify this by running `kubectl get nodes -o wide`
- these may be installed on a second cloud node, with its own k3s server; we will refer to this as the cloud-data-node
- if you don't have a cloud-data-node, you can follow [these instructions](./k3s-cloud-server.md)
- You will also need an installed k3s edge node which has already been set up to talk to k3s-edge-server
- instructions for registering a node running a **64 bit kernel and user space** are available [here](./k3s-edge-server.md#joining-a-k3s-edge-node-to-the-cluster)

**Hardware:**
- Xavier AGX or Raspberry Pi 4 using our [Smarter Yocto Images](http://gitlab.com/arm-research/smarter/smarter-yocto) (release > v0.6.4.1)
- Rpi4 4GB running Ubuntu 19.10 (can be provisioned using smarter edge setup convenience script found in the scripts directory) or Xavier AGX 16GB running L4T 32.4.3 provided by the jetpack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm based device running a **64 bit kernel and user space with docker installed** should work. For instructions on how to register a **non-yocto** node, you can follow [these instructions](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`. Note that **Ubuntu 20.04** on the RPI 4 will **not** work, please use **19.10**
- Rpi4 4GB running any Debian-based OS, or Xavier AGX 16GB running L4T 32.4.3 provided by the JetPack 4.4 release. Others have demonstrated this stack working on the Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm-based device running a **64 bit kernel and user space** should work.
- PS3 Eye Camera (or Linux compatible web cam with audio) serving both audio and video data (other USB cameras with microphones may work). Microphone **MUST** support 16KHz sampling rate.
- A development machine (your work machine) setup to issue kubectl commands to your edge k3s cluster
- (optional) PMS7003 Air Quality Sensor connected over Serial USB to USB port
@@ -27,55 +23,58 @@ In the case you wish to deploy the demo we assume you have done the following:
- Dev machine running kubectl client 1.25
- git, curl must also be installed
- K3s server version 1.25
- Node running docker > 18.09

**Connectivity:**
- You must be able to reach your node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
- The node must be able to reach your k3s-edge-master and cloud-data-node via IP
- You must be able to reach your edge node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
- The node must be able to reach your k3s-edge-server and cloud-data-node via IP
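
A quick way to sanity check this connectivity from your dev machine is to probe the two ports with netcat; the IP below is only a placeholder for your edge node's address, and this assumes `nc` is installed:
```bash
# Replace 192.0.2.10 with the IP address of your edge node
nc -zv 192.0.2.10 22    # ssh
nc -zv 192.0.2.10 2520  # webserver
```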

## Smarter k3s server configuration
## Deploy demo
- To deploy the base system components common to all edge nodes, as well as the demo applications, we opt to use **Helm v3**. To install helm on the machine from which you manage your k3s edge cluster, you can follow the guide [here](https://helm.sh/docs/intro/install/#from-script).
- Ensure in your environment that your kubeconfig is set properly. As a quick sanity check you can run:
```bash
kubectl cluster-info
```
and you should get a message like: `Kubernetes control plane is running at https://<k3s edge server ip>:<k3s edge server port>`
- Tell helm to add the smarter repo, such that you can deploy our charts:
```bash
helm repo add smarter https://smarter-project.github.io/documentation
```
- Use the helm chart at https://github.com/smarter-project/documentation/chart to install the CNI, DNS and device-manager. This can be done by running:
```bash
helm install my-smarter-edge smarter/smarter-edge --wait
```
- With the smarter-edge chart installed, you can verify that all the base pods are ready by running:
```bash
kubectl get pods -A -o wide
```
- Now we deploy our demo by first applying the helm chart for the demo:
```bash
helm install my-smarter-demo smarter/smarter-demo --namespace smarter --create-namespace
```
- At this point applications will be installed into the cluster, but no pods will come up as running, because the application pods rely on node labels being set before they will run.
- Label your nodes by running the following (a snippet for verifying the labels appears at the end of this section):
```bash
export NODE_NAME=<your node name>
kubectl label node $NODE_NAME smarter-fluent-bit=enabled
kubectl label node $NODE_NAME smarter-gstreamer=enabled
kubectl label node $NODE_NAME smarter-pulseaudio=enabled
kubectl label node $NODE_NAME smarter-inference=enabled
kubectl label node $NODE_NAME smarter-image-detector=enabled
kubectl label node $NODE_NAME smarter-audio-client=enabled
```
- At this point you should see each of the above workloads running on your target node once it has pulled down the images. You can monitor your cluster as each pod becomes ready by running:
```bash
kubectl get pods -A -w
```
- With all pods running successfully, if you are on the same network as your edge node, you can point a browser at the IP of the edge node and see the image detector running on your camera feed in real time.
- To terminate the demo, you can simply unlabel the node for each workload:
```bash
export NODE_NAME=<your node name>
kubectl label node $NODE_NAME smarter-fluent-bit-
kubectl label node $NODE_NAME smarter-gstreamer-
kubectl label node $NODE_NAME smarter-pulseaudio-
kubectl label node $NODE_NAME smarter-inference-
kubectl label node $NODE_NAME smarter-image-detector-
kubectl label node $NODE_NAME smarter-audio-client-
```
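
As a quick sanity check that the labels took effect, one option (not part of the original instructions) is to inspect the labels on the node:
```bash
kubectl get node $NODE_NAME --show-labels
# or list only nodes carrying one of the demo labels
kubectl get nodes -l smarter-image-detector=enabled
```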

- Use the helm chart on https://gitlab.com/smarter-project/documentation/chart to install CNI, DNS and device-manager
```bash
helm install --namespace smarter --create-namespace smarter-edge chart
```
- Use the helm chart on each of the modules. Remember to use the namespace and the correct labels. The individual charts do not install on devices automatically, they require labels.

## To set up your registered edge node from your development machine
Plug in the USB camera. You should be able to see the camera at `/dev/video0`.

The demo assumes that your microphone is assigned to card 2, device 0. On Jetson platforms the first USB microphone is automatically assigned to card 2, device 0; however, on **non-yocto** rpi4 devices, for instance, this is not the default. To fix this you must create the file `/etc/modprobe.d/alsa-base.conf` with the contents:
```
options snd_usb_audio index=2,3
options snd_usb_audio id="Mic1","Mic2"
```
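
To confirm which card the USB microphone actually landed on after a reboot, you can list the ALSA capture devices (this assumes `alsa-utils` is installed on the node):
```bash
# The USB microphone should show up as card 2, device 0
arecord -l
```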

On the rpi4 with **Ubuntu**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the file:
- `/boot/firmware/nobtcmd.txt` if Ubuntu 19.10
- `/boot/firmware/cmdline.txt` if Ubuntu 20.04
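
A minimal sketch of appending those flags from a shell, assuming the Ubuntu 20.04 path (use `nobtcmd.txt` instead on 19.10):
```bash
# Back up the file, then append the cgroup flags to the single-line kernel command line
sudo cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
sudo sed -i '1 s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/firmware/cmdline.txt
```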

Do not install docker using snap on **Ubuntu**; instead install it by running:
```bash
sudo apt update && sudo apt install docker.io
```

Then reboot the system.

If you are running on a **Xavier** (on the **non-yocto** build), **Xavier NX**, or a **Nano**, open the file `/etc/docker/daemon.json` on the device and ensure that the default runtime is set to nvidia. The file should look as follows:
```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
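
After editing `daemon.json`, docker has to pick the change up; assuming docker is managed by systemd, restarting the service and checking the reported runtimes is one way to confirm:
```bash
sudo systemctl restart docker
# The output should list nvidia and report it as the default runtime
sudo docker info | grep -i runtime
```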

For Single Tenant deployment instructions read [Here](./SingleTenantREADME.md)

For Virtual Tenant deployment instructions read [Here](./VirtualTenantREADME.md)
22 changes: 11 additions & 11 deletions k3s-cloud-master.md → k3s-cloud-server.md
@@ -1,18 +1,18 @@
# Overview
This document will help you run a Smarter k3s master
This document will help you run a Smarter k3s server

# Running on docker

## System requirements

### k3s cloud master
### k3s cloud server
* Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
* OS: Ubuntu 18.04 or later
* Architecture: aarch64 or amd64
* CPU: at least 1vcpu
* RAM: At least 3.75GB
* Storage: At least 10GB
* Multiple k3s cloud masters can be run in a single server if different server ports are used (HOSTPORT).
* Multiple k3s cloud servers can be run on a single host if different server ports are used (HOSTPORT).

### EKS or equivalent
* A k8s equivalent cluster
@@ -27,13 +27,13 @@ This document will help you run a Smarter k3s master

Make sure you open the ports on the k3s cloud cluster that edge devices need to access. The k3s server port should also be open to enable control of the k3s server.

## Setting k3s master up
## Setting k3s server up

[k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) allowing k3s to run as container.
This repository provides the file [k3s-cloud-start.sh](https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh) that automates that process and runs a k3s suitable to be a cloud k3s master
This repository provides the file [k3s-cloud-start.sh](./scripts/k3s-cloud-start.sh) that automates that process and runs a k3s instance suitable for use as a cloud k3s server.
Execute the following command to download the file:
```
wget https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh
wget https://raw.githubusercontent.com/smarter-project/documentation/main/scripts/k3s-cloud-start.sh
```
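
After downloading, make the script executable so it can be run directly:
```
chmod +x k3s-cloud-start.sh
```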

A few options should be set on the script, either via environment variables or by editing the script.
@@ -43,11 +43,11 @@ execute the script:
```
./k3s-cloud-start.sh
```
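
For example, a hypothetical invocation that overrides the exposed server port via the HOSTPORT variable mentioned above (check the script itself for the exact variable names it honors):
```
HOSTPORT=6443 ./k3s-cloud-start.sh
```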

The script will create another local script that can be used to restart k3s if necessary, the script is called start_k3s_\<instance name\>.sh.
The files token.\<instance name\> and kube.\<instance name\>.config contains the credentials to be use to authenticate a node (token file) or kubectl (kube.config file).
The script will create another local script that can be used to restart k3s if necessary; the script is called `start_k3s_<instance name>.sh`.
The files `token.<instance name>` and `kube.<instance name>.config` contain the credentials used to authenticate a node (token file) or kubectl (kube.config file).
*NOTE*: It is important that K3S_VERSION on the client matches the server, otherwise things are likely not to work.
The k3s-start.sh downloads a compatible k3s executable (that can replace kubectl) with the server and also creates a kubectl-\<instance name\>.sh script that emulates a kubectl with the correct credentials.
The file env-\<instance name\>.sh create an alias for kubectl and adds the KUBECONFIG enviroment variable.
The `k3s-start.sh` script downloads a k3s executable compatible with the server (it can replace kubectl) and also creates a `kubectl-<instance name>.sh` script that emulates kubectl with the correct credentials.
The file `env-<instance name>.sh` creates an alias for kubectl and adds the KUBECONFIG environment variable.
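
A short usage sketch, assuming the script above was run with a hypothetical instance name of `edge-demo`:
```
# Source the generated environment file to pick up the kubectl alias and KUBECONFIG,
# then inspect the cluster
source ./env-edge-demo.sh
kubectl get nodes -o wide
```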

# Joining a k3s node
To join an node which does not use our yocto build. Copy the kube_cloud_install-\<instance name\>.sh to the node and execute it. The script is already configured to connect to the server \<instance name\>.
To join a node to the cloud cluster, copy the `kube_cloud_install-<instance name>.sh` script to the node and execute it. The script is already configured to connect to the server `<instance name>`.
57 changes: 0 additions & 57 deletions k3s-edge-master.md

This file was deleted.
