
KRE (Konstellation Runtime Engine)

Konstellation Runtime Engine is an application that runs AI/ML models for inference based on the contents of a .krt file.

Engine

The following components are tracked with Coverage, Bugs, Maintainability Rating and Go Report badges:

  • Admin UI
  • Admin API
  • K8s Manager
  • NATS Manager

Runtime

The following component is tracked with Coverage, Bugs, Maintainability Rating and Go Report badges:

  • Mongo Writer

Runners

Each language has a specialized runner associated with it. Runners live in the kre-runners repo. You must clone that repository into a folder named runners at the root of this repository.
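As a sketch, assuming the kre-runners repository lives under the same GitHub organization (verify the URL before cloning), the checkout could look like:

```shell
# Clone the runners repo into a folder named "runners" at the repo root.
# The URL is an assumption based on the organization name; verify it first.
git clone https://github.com/konstellation-io/kre-runners.git runners
```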

Helm Chart

Refer to the chart's README.

Architecture

KRE's design is based on a microservice pattern, running on top of a Kubernetes cluster.

The following diagram shows the main components and how they relate to each other.

[Architecture diagram]

The main concepts of KRE are described below.

Engine

Before installing KRE, an existing Kubernetes namespace is required. By convention it is named kre, but feel free to use whatever you like. The installation process will deploy components responsible for managing the full lifecycle of this AI solution.
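Creating that namespace beforehand might look like this (the name kre is only the convention mentioned above):

```shell
# Create the namespace KRE will be installed into; any name works,
# "kre" is just the convention.
kubectl create namespace kre
```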

The Engine is composed of the components listed under Engine above: Admin UI, Admin API, K8s Manager and NATS Manager.

KRT

Konstellation Runtime Transport is a compressed file containing the definition of a runtime version, including the code that must be executed and a YAML file called kre.yaml describing the desired workflow definitions.

The generic structure of a kre.yaml is as follows:

version: my-project-v1
description: This is the new version that solves some problems.
entrypoint:
  proto: public_input.proto
  image: konstellation/kre-runtime-entrypoint:latest

config:
  variables:
    - API_KEY
    - API_SECRET
  files:
    - HTTPS_CERT

nodes:
  - name: ETL
    image: konstellation/kre-py:latest
    src: src/etl/execute_etl.py

  - name: Execute DL Model
    image: konstellation/kre-py:latest
    src: src/execute_model/execute_model.py

  - name: Create Output
    image: konstellation/kre-py:latest
    src: src/output/output.py

  - name: Client Metrics
    image: konstellation/kre-py:latest
    src: src/client_metrics/client_metrics.py

workflows:
  - name: New prediction
    entrypoint: MakePrediction
    sequential:
      - ETL
      - Execute DL Model
      - Create Output
  - name: Save Client Metrics
    entrypoint: SaveClientMetric
    sequential:
      - Client Metrics
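Since a KRT is described above as a compressed file bundling the code and kre.yaml, packaging a version might be sketched as follows, assuming the transport is a gzipped tarball (this is an assumption; verify the exact packaging against the KRT tooling before relying on it). The paths match the kre.yaml example:

```shell
# Sketch: bundle the version definition, proto and sources into a .krt,
# assuming .krt is a gzipped tarball (unconfirmed; check the KRT tooling).
tar -zcvf my-project-v1.krt kre.yaml public_input.proto src/
```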

Development

Requirements

To start developing on this project you will need the following tools:

  • gettext: OS package to fill templates during deployment
  • minikube: the local version of Kubernetes to deploy KRE
  • helm: K8s package manager. Make sure you have v3+
  • helm-docs: Helm doc auto-generation tool
  • yq: YAML processor. Make sure you have v4+
  • pre-commit: pre-commit hook execution tool that ensures best practices are followed before committing any change
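A quick sanity check of the version-sensitive tools, assuming they are already on your PATH, might be:

```shell
# helm must be v3+ and yq must be v4+ per the requirements above
helm version --short
yq --version
minikube version
```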

Pre-commit hooks setup

From the repository root execute the following commands:

pre-commit install
pre-commit install-hooks

Note: Commits that have not passed the required hooks will be rejected.

Local Environment

Requirements

  • Minikube >= 1.26
  • Docker >= 18.9, if used as the driver for Minikube. Check the Minikube documentation for a complete list of supported drivers

Basic usage

This repo contains a tool called ./krectl.sh to handle common actions you will need during development.

All the configuration needed to run KRE locally can be found in the .krectl.conf file. The default values are usually fine. Check Minikube's parameters if you need to tweak the resources assigned to it.

Run help to get info for each command:

$> krectl.sh [command] --help

// Outputs:

  krectl.sh -- a tool to manage KRE environment during development.

  syntax: krectl.sh <command> [options]

    commands:
      dev     creates a complete local environment and auto-login to frontend.
      start   starts minikube kre profile.
      stop    stops minikube kre profile.
      login   creates a login URL and open your browser automatically on the admin page.
      build   calls docker to build all images inside minikube.
      deploy  calls helm to install/upgrade a kre release on minikube.
      delete  calls kubectl to remove runtimes or versions.

    global options:
      h     prints this help.
      v     verbose mode.

Install local environment

To install KRE in your local environment:

$ ./krectl.sh dev

It will install everything in the namespace specified in your development .krectl.conf file.

Login to local environment

First, remember to edit your /etc/hosts, see ./krectl.sh dev output for more details.
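The entries typically map the local hostnames to the Minikube IP. A hypothetical sketch (only admin.kre.local is confirmed by the login URL shown later; check the ./krectl.sh dev output for the real hostname list):

```shell
# Append the Minikube IP and the KRE admin hostname to /etc/hosts.
# "kre" is the minikube profile name used by krectl.sh.
echo "$(minikube ip -p kre)  admin.kre.local" | sudo tee -a /etc/hosts
```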

NOTE: If you have the hostctl tool installed, updating /etc/hosts will be done automatically too.

Now you can access the Admin UI by visiting the login URL, which is opened automatically when you run the following script:

$ ./krectl.sh login [--new]

You will see an output like this:

⏳ Calling Admin API...

 Login done. Open your browser at:

 🌎 http://admin.kre.local/signin/c7d024eb-ce35-4328-961a-7d2b79ee8988

✔️  Done.

Versioning lifecycle

The development lifecycle of KRE has three main stages, depending on whether we are adding a new feature, releasing a new version with some features, or applying a fix to a current release.

Alphas

To add a new feature, create a feature branch from main. After the Pull Request is merged, a workflow runs the tests. If all tests pass, a new alpha tag is created (e.g. v0.0-alpha.0), and a new release is generated from this tag.

Releases

After releasing a number of alpha versions, you may want to create a release version. This is triggered manually with the Release workflow, which creates a new release branch and a new tag following the pattern v0.0.0. Along with this tag, a new release is created.

Fixes

If you find a bug in a release, you can apply a bugfix by creating a fix branch from the specific release branch and opening a Pull Request against that same branch. When merged, the tests run against it; once all tests pass, a new fix tag is created, incrementing the patch portion of the version, and a new release is built and published.
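The fix flow described above could be sketched as follows (the branch and version names are purely illustrative):

```shell
# Branch from the affected release branch and target it with the PR.
# "release/v1.2.3" and "fix/some-bug" are hypothetical names.
git checkout release/v1.2.3
git checkout -b fix/some-bug
# ...commit the fix, push, and open a Pull Request against release/v1.2.3
```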