diff --git a/README.md b/README.md
index 72de3db..9a3bace 100644
--- a/README.md
+++ b/README.md
@@ -3,21 +3,17 @@
 [![Artifact Hub](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/smarter)](https://artifacthub.io/packages/search?repo=smarter)
 
 ## This demo makes the following assumptions about your environment
-In the case you wish to deploy the demo we assume you have done the following:
-- You should have a cloud-based k3s master dedicated for edge deployment (we will refer to this as k3s-edge-master) before proceeding any further
-  - if you don't have a k3s-edge-master, you can follow [these instructions](./k3s-edge-master.md)
+In this guide we assume you have done the following:
+- You should have a cloud-based k3s server dedicated for edge deployment (we will refer to this as k3s-edge-server) before proceeding any further
+  - if you don't have a k3s-edge-server, you can follow [these instructions](./k3s-edge-server.md)
 - You should also have an installed InfluxDB and Grafana instance in a separate kubernetes cluster
-  - these may be installed on a second cloud node, with its own k3s master, we will refer to this as the cloud-data-node
-  - if you don't have a cloud-data-node, you can follow [these instructions](./cloud-data-node.md)
-- You will also need an installed k3s edge node which has already been setup to talk to k3s-edge-master
-  - instructions for installing a SMARTER image on Xavier AGX 16GB or Rpi4 are available [here](http://gitlab.com/arm-research/smarter/smarter-yocto)
-  - instructions for registering an arbitrary arm64 node running a **64 bit kernel and user space with docker installed** are available [here](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`
-- You will need a KUBECONFIG that is authenticated against the k3s-edge-master on the Dev machine (where you intend to run these commands)
-- Using our provided node images, your nodes should automatically register with the edge k3s master. You can verify this by running `kubectl get nodes -o wide`
+  - these may be installed on a second cloud node, with its own k3s server; we will refer to this as the cloud-data-node
+  - if you don't have a cloud-data-node, you can follow [these instructions](./k3s-cloud-server.md)
+- You will also need an installed k3s edge node which has already been set up to talk to k3s-edge-server
+  - instructions for registering a node running a **64 bit kernel and user space** are available [here](./k3s-edge-server.md#joining-a-k3s-edge-node-to-the-cluster)
 
 **Hardware:**
-- Xavier AGX or Raspberry Pi 4 using our [Smarter Yocto Images](http://gitlab.com/arm-research/smarter/smarter-yocto) (release > v0.6.4.1)
-- Rpi4 4GB running Ubuntu 19.10 (can be provisioned using smarter edge setup convenience script found in the scripts directory) or Xavier AGX 16GB running L4T 32.4.3 provided by the jetpack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm based device running a **64 bit kernel and user space with docker installed** should work. For instructions on how to register a **non-yocto** node, you can follow [these instructions](./k3s-edge-master.md) under the section `Joining a non-yocto k3s node`. Note that **Ubuntu 20.04** on the RPI 4 will **not** work, please use **19.10**
+- Rpi4 4GB running any Debian based OS, or Xavier AGX 16GB running L4T 32.4.3 provided by the jetpack 4.4 release. Others have demonstrated this stack working on Nvidia Nano and Nvidia Xavier NX, but our team does not test on these platforms. Any Arm based device running a **64 bit kernel and user space** should work.
 - PS3 Eye Camera (or Linux compatible web cam with audio) serving both audio and video data (other USB cameras with microphones may work). Microphone **MUST** support 16KHz sampling rate.
 - A development machine (your work machine) setup to issue kubectl commands to your edge k3s cluster
 - (optional) PMS7003 Air Quality Sensor connected over Serial USB to USB port
@@ -27,55 +23,58 @@ In the case you wish to deploy the demo we assume you have done the following:
 - Dev machine running kubectl client 1.25
 - git, curl must also be installed
 - K3s server version 1.25
-- Node running docker > 18.09
 
 **Connectivity:**
-- You must be able to reach your node via IP on ports `22`(ssh) and `2520`(Webserver) from your dev machine for parts of this demo to work
-- The node must be able to reach your k3s-edge-master and cloud-data-node via IP
+- You must be able to reach your edge node via IP on ports `22` (ssh) and `2520` (Webserver) from your dev machine for parts of this demo to work
+- The node must be able to reach your k3s-edge-server and cloud-data-node via IP
 
-## Smarter k3s server configuration
+## Deploy demo
+- To deploy the base system components common to all edge nodes, as well as the demo applications, we opt to use **Helm v3**. To install helm on the device from which you manage your k3s edge cluster, you can follow the guide [here](https://helm.sh/docs/intro/install/#from-script).
+- Ensure in your environment that your kubeconfig is set properly. As a quick sanity check you can run:
+  ```bash
+  kubectl cluster-info
+  ```
+  and you should get a message: `Kubernetes control plane is running at https://<k3s-edge-server ip>:<port>`
+- To deploy the demo applications, label your edge node for each workload (a compact loop variant is sketched below):
+  ```bash
+  export NODE_NAME=<your node name>
+  kubectl label node $NODE_NAME smarter-fluent-bit=enabled
+  kubectl label node $NODE_NAME smarter-gstreamer=enabled
+  kubectl label node $NODE_NAME smarter-pulseaudio=enabled
+  kubectl label node $NODE_NAME smarter-inference=enabled
+  kubectl label node $NODE_NAME smarter-image-detector=enabled
+  kubectl label node $NODE_NAME smarter-audio-client=enabled
+  ```
+- At this point you should see each of the above workloads running on your target node once it has pulled down the images. You can monitor your cluster as each pod becomes ready by running:
+  ```bash
+  kubectl get pods -A -w
+  ```
+- With all pods running successfully, if you are on the same network as your edge node, you can navigate a browser to the IP of the edge node and see the image detector running on your camera feed in real time.
+- To terminate the demo, you can simply unlabel the node for each workload:
+  ```bash
+  export NODE_NAME=<your node name>
+  kubectl label node $NODE_NAME smarter-fluent-bit-
+  kubectl label node $NODE_NAME smarter-gstreamer-
+  kubectl label node $NODE_NAME smarter-pulseaudio-
+  kubectl label node $NODE_NAME smarter-inference-
+  kubectl label node $NODE_NAME smarter-image-detector-
+  kubectl label node $NODE_NAME smarter-audio-client-
+  ```
-- Use the helm chart on https://gitlab.com/smarter-project/documentation/chart to install CNI, DNS and device-manager
-  ```bash
-  helm install --namespace smarter --create-namespace smarter-edge chart
-  ```
-- Use the helm chart on each of the modules. Remember to use the namespace and the correct labels. The individual charts do not install on devices automatically, they require labels.
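+Because the charts only deploy onto labeled nodes, enabling every demo workload can be collapsed into one loop. This is a convenience sketch, not part of the charts themselves; `my-edge-node` is a placeholder for the name reported by `kubectl get nodes`:
+```bash
+# Enable every demo workload on one node in a single pass.
+# --overwrite makes the loop safe to re-run on an already-labeled node.
+NODE_NAME=my-edge-node
+for workload in smarter-fluent-bit smarter-gstreamer smarter-pulseaudio \
+                smarter-inference smarter-image-detector smarter-audio-client; do
+  kubectl label node "$NODE_NAME" "${workload}=enabled" --overwrite
+done
+```
+To tear the demo down the same way, change `"${workload}=enabled"` to `"${workload}-"` in the same loop.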
-
-## To setup your registered edge node from your development machine
-Plugin USB camera. You should be able to see the camera at `/dev/video0`.
-
-The demo assumes that your microphone is assigned to card 2 device 0. On Jetson platforms the first usb microphone is automatically assigned to card 2 device 0, however on the **non-yocto** rpi4 devices this is not the default for instance. To fix this you must create the file `/etc/modprobe.d/alsa-base.conf` with the contents:
-```
-options snd_usb_audio index=2,3
-options snd_usb_audio id="Mic1","Mic2"
-```
-
-On the rpi4 with **Ubuntu**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the file:
-```
-- `/boot/firmware/nobtcmd.txt` if Ubuntu 19.10
-- `/boot/firmware/cmdline.txt` if Ubuntu 20.04
-```
-
-Do not install docker using snap with **Ubuntu** instead install by running:
-```bash
-sudo apt update && sudo apt install docker.io
-```
-
-Then reboot the system.
-
-If you are running on a **Xavier**(on the **non-yocto** build), **Xavier NX**, or a **Nano**, open the file `/etc/docker/daemon.json` on the device and ensure that the default runtime is set to nvidia. The file should look as follows:
-```bash
-{
-  "default-runtime": "nvidia",
-  "runtimes": {
-    "nvidia": {
-      "path": "nvidia-container-runtime",
-      "runtimeArgs": []
-    }
-  }
-}
-```
-
-For Single Tenant deployment instructions read [Here](./SingleTenantREADME.md)
-
-For Virtual Tenant deployment instructions read [Here](./VirtualTenantREADME.md)
diff --git a/k3s-cloud-master.md b/k3s-cloud-server.md
similarity index 52%
rename from k3s-cloud-master.md
rename to k3s-cloud-server.md
index 58849da..d6de147 100644
--- a/k3s-cloud-master.md
+++ b/k3s-cloud-server.md
@@ -1,18 +1,18 @@
 # Overview
-This document will help you run a Smarter k3s master
+This document will help you run a Smarter k3s server
 
 # Running on docker
 
 ## System requirements
 
-### k3s cloud master
+### k3s cloud server
 * Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
 * OS: Ubuntu 18.04 or later
 * Architecture: aarch64 or amd64
 * CPU: at least 1vcpu
 * RAM: At least 3.75GB
 * Storage: At least 10GB
-* Multiple k3s cloud masters can be run in a single server if different server ports are used (HOSTPORT).
+* Multiple k3s cloud servers can be run in a single server if different server ports are used (HOSTPORT).
 
 ### EKS or equivalent
 * A k8s equivalent cluster
@@ -27,13 +27,13 @@ This document will help you run a Smarter k3s master
 
 Make sure you open the ports from the k3s cloud cluster that edge devices need to access. The k3s server port should also be open to enable control of the k3s server
 
-## Setting k3s master up
+## Setting k3s server up
 
 [k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) allowing k3s to run as container.
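+As a point of reference for what the script below automates, a hand-run containerized server might look like the following minimal sketch. The image tag, container name, and token are illustrative assumptions only; use the script for a properly configured server:
+```bash
+# Illustrative only: run a k3s server inside docker.
+# --privileged is required for k3s-in-docker; 6443 is the default API port.
+docker run -d --name k3s-cloud-server \
+  --privileged \
+  -p 6443:6443 \
+  -e K3S_TOKEN=replace-with-a-secret \
+  rancher/k3s:v1.25.4-k3s1 server
+```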
-This repository provides the file [k3s-cloud-start.sh](https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh) that automates that process and runs a k3s suitable to be a cloud k3s master
+This repository provides the file [k3s-cloud-start.sh](./scripts/k3s-cloud-start.sh) that automates that process and runs a k3s suitable to be a cloud k3s server.
 Execute the following command to download the file:
 ```
-wget https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-cloud-start.sh
+wget https://raw.githubusercontent.com/smarter-project/documentation/main/scripts/k3s-cloud-start.sh
 ```
 
 A few options should be set on the script either by environment variables or editing the script.
 
@@ -43,11 +43,11 @@ execute the script:
 ```
 ./k3s-cloud-start.sh
 ```
 
-The script will create another local script that can be used to restart k3s if necessary, the script is called start_k3s_\<hostname\>.sh.
-The files token.\<hostname\> and kube.\<hostname\>.config contains the credentials to be use to authenticate a node (token file) or kubectl (kube.config file).
+The script will create another local script that can be used to restart k3s if necessary; the script is called `start_k3s_<hostname>.sh`.
+The files `token.<hostname>` and `kube.<hostname>.config` contain the credentials used to authenticate a node (token file) or kubectl (kube.config file).
 *NOTE*: Is is important K3S_VERSION on client matches the server otherwise things are likely not to work
-The k3s-start.sh downloads a compatible k3s executable (that can replace kubectl) with the server and also creates a kubectl-\<hostname\>.sh script that emulates a kubectl with the correct credentials.
-The file env-\<hostname\>.sh create an alias for kubectl and adds the KUBECONFIG enviroment variable.
+The `k3s-cloud-start.sh` script downloads a k3s executable compatible with the server (it can also replace kubectl) and creates a `kubectl-<hostname>.sh` script that emulates kubectl with the correct credentials.
+The file `env-<hostname>.sh` creates an alias for kubectl and sets the KUBECONFIG environment variable.
 
 # Joining a k3s node
-To join an node which does not use our yocto build. Copy the kube_cloud_install-\<hostname\>.sh to the node and execute it. The script is already configured to connect to the server \<hostname\>.
+To join a node to the cloud cluster, copy the `kube_cloud_install-<hostname>.sh` script to the node and execute it. The script is already configured to connect to the server `<hostname>`.
diff --git a/k3s-edge-master.md b/k3s-edge-master.md
deleted file mode 100644
index a58c42a..0000000
--- a/k3s-edge-master.md
+++ /dev/null
@@ -1,57 +0,0 @@
-# Overview
-This document will help you run a Smarter k3s master
-
-# Running on docker
-
-## System requirements
-
-### k3s edge master
-* Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
-* OS: Ubuntu 18.04
-* Architecture: amd64
-* CPU: at least 1vcpu
-* RAM: At least 3.75GB
-* Storage: At least 10GB
-
-### k3s edge master
-* Local linux (x86_64 or arm64)/windows/MacOS machine with docker, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
-* Multiple k3s edge masters can be run in a single server if different server ports are used (HOSTPORT).
-
-### dev machine
-* User's desktop, capable of ssh'ing to the k3s edge master host, it also can be k3s edge master
-
-## Network topology
-* The k3s master host and the dev machine both need access to the Internet.
-* The dev machine needs to be able to `ssh` and `scp` into the k3s master host.
-* The k3s master needs to have port 6443 (or the port that is desired to run k3s on) open for k3s.
-* The edge node needs to have access to port 6443 (or the port that is desired to run k3s on) in the k3s master.
-
-### Firewall
-
-Make sure you open port 6443 or the port used in your instance installation in your firewall so external hosts can contact your new master.
-On AWS, you will need to do this by editing the security group policy and adding an inbound rule.
-
-## Setting k3s master up
-
-[k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) allowing k3s to run as container.
-This repository provides the file [k3s-start.sh](https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-start.sh) that automates that process and runs a k3s suitable to be a SMARTER k3s master
-Execute the following command to download the file:
-```
-wget https://gitlab.com/smarter-project/documentation.git/public/scripts/k3s-start.sh
-```
-
-A few options should be set on the script either by environment variables or editing the script.
-
-execute the script:
-```
-./k3s-start.sh
-```
-
-The script will create another local script that can be used to restart k3s if necessary, the script is called start_k3s_\<hostname\>.sh.
-The files token.\<hostname\> and kube.\<hostname\>.config contains the credentials to be use to authenticate a node (token file) or kubectl (kube.config file).
-*NOTE*: Is is important K3S_VERSION on client matches the server otherwise things are likely not to work
-The k3s-start.sh downloads a compatible k3s executable (that can replace kubectl) with the server and also creates a kubectl-\<hostname\>.sh script that emulates a kubectl with the correct credentials.
-The file env-\<hostname\>.sh create an alias for kubectl and adds the KUBECONFIG enviroment variable.
-
-# Joining a non-yocto k3s node
-To join an node which does not use our yocto build. Copy the kube_edge_install-\<hostname\>.sh to the node and execute it. The script is already configured to connect to the server \<hostname\>.
diff --git a/k3s-edge-server.md b/k3s-edge-server.md
new file mode 100644
index 0000000..0bb802f
--- /dev/null
+++ b/k3s-edge-server.md
@@ -0,0 +1,91 @@
+# Overview
+This document will help you run a Smarter k3s server
+
+# Running on docker
+
+## System requirements
+
+### k3s edge server
+* Local linux box, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
+* OS: Ubuntu 18.04 or later
+* Architecture: amd64
+* CPU: at least 1vcpu
+* RAM: At least 3.75GB
+* Storage: At least 10GB
+
+### k3s edge server
+* Local linux (x86_64 or arm64)/windows/MacOS machine with docker, AWS EC2 VM instance or Google Cloud Platform GCE VM instance
+* Multiple k3s edge servers can be run in a single server if different server ports are used (HOSTPORT).
+
+### dev machine
+* User's desktop, capable of ssh'ing to the k3s edge server host; it can also be the k3s edge server
+
+## Network topology
+* The k3s server host and the dev machine both need access to the Internet.
+* The dev machine needs to be able to `ssh` and `scp` into the k3s server host.
+* The k3s server needs to have port 6443 (or the port that is desired to run k3s on) open for k3s.
+* The edge node needs to have access to port 6443 (or the port that is desired to run k3s on) in the k3s server.
+
+### Firewall
+
+Make sure you open port 6443 or the port used in your instance installation in your firewall so external hosts can contact your new server.
+On AWS, you will need to do this by editing the security group policy and adding an inbound rule.
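+For example, on AWS this can also be done from the CLI. The security group ID below is a placeholder for your server instance's group, and restricting `--cidr` to the addresses of your edge nodes is safer than opening the port to the world:
+```bash
+# Allow inbound TCP 6443 (the k3s API port) on the server's security group.
+aws ec2 authorize-security-group-ingress \
+  --group-id sg-0123456789abcdef0 \
+  --protocol tcp \
+  --port 6443 \
+  --cidr 203.0.113.0/24
+```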
+
+## Setting k3s server up
+
+[k3s](https://github.com/k3s-io/k3s) repository and [Rancher docker hub](https://hub.docker.com/r/rancher/k3s/) provide docker images and artifacts (k3s) allowing k3s to run as container.
+This repository provides the file [k3s-start.sh](./scripts/k3s-start.sh) that automates that process and runs a k3s suitable to be a SMARTER k3s server.
+Execute the following command to download the file:
+```
+wget https://raw.githubusercontent.com/smarter-project/documentation/main/scripts/k3s-start.sh
+```
+
+A few options should be set on the script either by environment variables or editing the script.
+
+execute the script:
+```bash
+./k3s-start.sh
+```
+
+The script will create another local script that can be used to restart k3s if necessary; the script is called `start_k3s_<hostname>.sh`.
+The files `token.<hostname>` and `kube.<hostname>.config` contain the credentials used to authenticate a node (token file) or kubectl (kube.config file).
+*NOTE*: It is important that the K3S_VERSION on the client matches the server, otherwise things are likely not to work.
+The `k3s-start.sh` script downloads a k3s executable compatible with the server (it can also replace kubectl) and creates a `kubectl-<hostname>.sh` script that emulates kubectl with the correct credentials.
+The file `env-<hostname>.sh` creates an alias for kubectl and sets the KUBECONFIG environment variable.
+
+# Joining a k3s edge node to the cluster
+The following instructions describe how to set up a node for our demo.
+
+## Set up your edge nodes and join the edge cluster
+Plug in a USB camera. You should be able to see the camera at `/dev/videoX`. Take note of which video device the camera is attached to. Most likely it will be `/dev/video0` or `/dev/video1`.
+
+On the rpi4 with **Ubuntu**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the file:
+- `/boot/firmware/nobtcmd.txt` if Ubuntu 19.10
+- `/boot/firmware/cmdline.txt` if Ubuntu 20.04
+
+On the rpi4 with **64 bit Raspbian**, you must also append the text `cgroup_memory=1 cgroup_enable=memory` to the file:
+- `/boot/cmdline.txt`
+
+Then reboot the system.
+
+If you are running on a **Xavier**, **Xavier NX**, or a **Nano**, open the file `/etc/docker/daemon.json` on the device and ensure that the default runtime is set to nvidia. The file should look as follows:
+```json
+{
+  "default-runtime": "nvidia",
+  "runtimes": {
+    "nvidia": {
+      "path": "nvidia-container-runtime",
+      "runtimeArgs": []
+    }
+  }
+}
+```
+To join a node to the edge cluster, copy the `kube_edge_install-<hostname>.sh` script generated from the step above to the node and execute it by simply running:
+```bash
+./kube_edge_install-<hostname>.sh
+```
+The script is already configured to connect to the server `<hostname>`. At this point your node will be registered to the cluster, awaiting the deployment of the base smarter edge infrastructure elements.
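+As a quick check from your dev machine (with KUBECONFIG pointing at the k3s-edge-server), you can watch the node register; the name shown will be whatever hostname the edge device joined with:
+```bash
+# The new edge node should appear and eventually report Ready.
+kubectl get nodes -o wide --watch
+```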