


Oakestra USENIX ATC 2023 Artifacts

This repository contains the artifacts for the paper:

Oakestra: A Lightweight Hierarchical Orchestration Framework for Edge Computing

Abstract: Edge computing seeks to enable applications with strict latency requirements by utilizing resources deployed in diverse, dynamic, and possibly constrained environments closer to the users. Existing state-of-the-art orchestration frameworks (e.g., Kubernetes) perform poorly at the edge since they were designed for reliable, low-latency, high-bandwidth cloud environments. We present Oakestra, a hierarchical, lightweight, flexible, and scalable orchestration framework for edge computing. Through its novel federated three-tier resource management, delegated task scheduling, and semantic overlay networking, Oakestra can flexibly consolidate multiple infrastructure providers and support applications over dynamic variations at the edge. Our comprehensive evaluation against the state of the art demonstrates the significant benefits of Oakestra, as it achieves an approximately tenfold reduction in resource usage through reduced management overhead and a 10% application performance improvement due to lightweight operation over constrained hardware.

Oakestra is an open-source project. Most of the code used for this paper is upstream, or is in the process of being upstreamed.

@inproceedings{Bartolomeo2023,
author = {Bartolomeo, Giovanni and Yosofie, Mehdi and Bäurle, Simon and Haluszczynski, Oliver and Mohan, Nitinder and Ott, Jörg},
title = {{Oakestra}: A Lightweight Hierarchical Orchestration Framework for Edge Computing},
booktitle = {2023 USENIX Annual Technical Conference (USENIX ATC 23)},
year = {2023},
address = {Boston, MA},
url = {https://www.usenix.org/conference/atc23/presentation/bartolomeo},
publisher = {USENIX Association},
month = jul,
}

Artifact Structure

Our artifact repositories for reproducing the experiments and results in the paper are organized as follows.

  1. [This] Main repository: This repository contains the Root and Cluster orchestrator folders, as well as the Node Engine source code for the worker node (see the layout sketch after this list).

  2. Network repository: This repository contains the Root, Cluster, and Worker network components.

  3. Experiments repository: Located in the Experiments/ sub-directory of this repository, it includes the setup instructions to create your first Oakestra infrastructure and a set of scripts that automate the results-collection procedure and reproduce our results.

  4. [Optional] Dashboard: The repository contains a front-end application that can be used to interact graphically with the platform. It's optional but provides a convenient web-based GUI for Oakestra.
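
For orientation, here is a minimal sketch of cloning this repository and inspecting its layout. The clone URL follows from the repository name; the folder names in the final comment are assumptions based on the list above, so expect the actual names to differ slightly.

    # Clone the artifact repository and inspect its top-level layout.
    git clone https://github.com/oakestra/USENIX-ATC23-Oakestra-Artifacts.git
    ls USENIX-ATC23-Oakestra-Artifacts
    # Expect (names approximate) folders for the Root orchestrator, the
    # Cluster orchestrator, the Node Engine, and Experiments/.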


Getting Started

Set up the infrastructure

We offer three distinct possibilities for setting up Oakestra infrastructure.

I. Single-node Infrastructure

The most straightforward setup is a single-cluster, single-worker-node deployment of Oakestra. All components are deployed on the same machine; this setup is recommended for testing and development purposes. We provide an Oakestra VM pre-configured for this scenario, which you can find here.

Disclaimer: Please bear in mind that some of the experiments, such as the stress test and the AR pipeline, require adequate hardware to run in a single-node setup. In this case, we recommend 16 GB of RAM and a 16-core CPU (see the resource check below).
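
A minimal sanity-check sketch, using only standard Linux utilities, to confirm a single machine meets the recommended resources before running those experiments:

    # Check that this machine meets the recommended single-node resources
    # (16 GB of RAM and a 16-core CPU, per the disclaimer above).
    cores=$(nproc)
    mem_gb=$(free -g | awk '/^Mem:/ {print $2}')
    [ "$cores" -ge 16 ] || echo "warning: only $cores CPU cores (16 recommended)"
    [ "$mem_gb" -ge 16 ] || echo "warning: only ${mem_gb} GB of RAM (16 GB recommended)"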

II. Separate Worker Infrastructure

A worker node is an external machine (machine 2) connected to the control plane (machine 1). This setup is recommended if you want to get started with a small cluster because it enables horizontal scalability by introducing more worker nodes (on multiple Linux machines) attached to the same master (machine 1).

III. Multiple Machine Infrastructure

This is a single-cluster setup, but with the Root and Cluster control planes separated onto two different machines. This setup is recommended if you foresee a multi-cluster deployment in the future, because new machines with new cluster orchestrators and worker nodes can be connected to the root orchestrator deployed on machine 1.

Hardware Requirements

Scenario 1

  • AMD64 (x86-64) machine
    • 8 GB of RAM
    • 50 GB of disk space
    • 8-core CPU

Scenario 2 & Scenario 3

  • Root Orchestrator

    • 2 GB of RAM
    • 2-core CPU, ARM64 or AMD64 architecture
    • 10 GB of disk space
      • Tested on: Ubuntu 20.04, Windows 10, macOS Monterey
  • Cluster Orchestrator

    • 2 GB of RAM
    • 2-core CPU, ARM64 or AMD64 architecture
    • 5 GB of disk space
      • Tested on: Ubuntu 20.04, Windows 10, macOS Monterey
  • Worker Node (see the prerequisite check after this list)

    • Linux-based OS
    • 2-core CPU, ARM64 or AMD64 architecture
    • 2 GB of disk space
    • iptables
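
A quick sketch for verifying the worker-node prerequisites above; it uses only standard Linux tools and assumes nothing Oakestra-specific:

    # Verify worker-node prerequisites: Linux, a supported CPU architecture,
    # iptables, and enough free disk space.
    uname -s                 # should print: Linux
    uname -m                 # should print: x86_64 (AMD64) or aarch64 (ARM64)
    command -v iptables >/dev/null || echo "iptables not found: please install it"
    df -h /                  # confirm at least 2 GB of free disk space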

Network requirements

  • The Root orchestrator and the Cluster orchestrator must be mutually reachable
  • The Cluster orchestrator must be reachable from the worker node
  • Each worker exposes port 50103 for tunneling
  • The Root and Cluster orchestrators expose port 80 and ports in the range 10000-12000 (see the reachability sketch below)
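
The following sketch checks these requirements with ping and netcat. The IP addresses are placeholders for your own machines, and 10000 is just one sample port from the documented range:

    # Placeholder addresses; substitute your actual machines.
    ROOT=192.0.2.10; CLUSTER=192.0.2.11; WORKER=192.0.2.12
    ping -c 3 "$CLUSTER"      # run on the root machine: root -> cluster
    ping -c 3 "$ROOT"         # run on the cluster machine: cluster -> root
    # From the worker: the cluster orchestrator's ports (80 and 10000-12000).
    nc -vz "$CLUSTER" 80
    nc -vz "$CLUSTER" 10000   # one sample port from the 10000-12000 range
    # From the cluster: the worker's tunneling port.
    nc -vz "$WORKER" 50103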

Step-by-Step Instructions

We have created a detailed README and getting-started guide with step-by-step instructions, which you can find here. Please follow the instructions provided in Section 4 of the README file.

Networking

Please see the Oakestra Net Artifact Repository for setting up networking.

Frontend

To make your life easier, you can run the Oakestra front-end. Please check the Dashboard repository for more information.

Reproducing Experiments

Please see the Experiments/ folder for the experiment artifacts. We provide the scripts that we used to obtain the results in our paper using Oakestra. A summary of our tests, their setup times, and estimated runtimes follows (a batch-run sketch comes after the table).

Test ID   Test Description           Setup Time   Est. Runtime
Test 1    Deployment Time            0h 05m       0h 05m
Test 2    Network Overhead           0h 05m       0h 02m
Test 3    Network Bandwidth          0h 05m       0h 05m
Test 4    Control Messages           0h 01m       0h 05m
Test 5    Stress Test                0h 05m       0h 05m
Test 6    Video Analytics Pipeline   0h 05m       0h 10m
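
If you want to record wall-clock runtimes while reproducing the tests, a small bash sketch along these lines may help. The script names run_test1.sh through run_test6.sh are hypothetical placeholders; substitute the actual per-test scripts from the Experiments/ folder as described in Section 5 of the README:

    # Hypothetical batch runner: times each experiment script and logs results.
    # run_testN.sh are placeholder names, not actual scripts from this repo.
    for n in 1 2 3 4 5 6; do
      echo "=== Test $n ===" | tee -a results.log
      { time ./run_test$n.sh ; } 2>> results.log
    done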

Please follow the instructions provided in Section 5 of the README file here.

Beyond the paper

This repository recreates our USENIX ATC artifacts and is therefore out of sync with the main Oakestra development. Please see the main branch of Oakestra for the latest features. Also, check out our detailed wiki for more information, and join our community on Discord: https://discord.gg/7F8EhYCJDf.
