Konstellation Runtime Engine is an application that allows running AI/ML models for inference based on the content of a `.krt` file.
| Component    | Coverage | Bugs | Maintainability Rating | Go report |
|--------------|----------|------|------------------------|-----------|
| Admin UI     | -        |      |                        |           |
| Admin API    |          |      |                        |           |
| K8s Manager  |          |      |                        |           |
| NATS Manager |          |      |                        |           |

| Component    | Coverage | Bugs | Maintainability Rating | Go report |
|--------------|----------|------|------------------------|-----------|
| Mongo Writer |          |      |                        |           |
Each language has a specialized runner associated with it. They are located in the kre-runners repo. You must clone that repository into a folder named `runners` at the root of this repository.
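For instance, cloning into the expected folder might look like this (the repository URL is an assumption here; use the actual location of the kre-runners repo):

```bash
# From the root of this repository, clone the runners into a folder named "runners"
git clone https://github.com/konstellation-io/kre-runners.git runners
```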
Refer to the chart's README.
KRE design is based on a microservice pattern to be run on top of a Kubernetes cluster.
The following diagram shows the main components and how they relate to each other.
The main concepts of KRE are described below.
Before installing KRE, an existing Kubernetes namespace is required. By convention it is named `kre`, but feel free to use whatever name you like. The installation process will deploy some components that are responsible for managing the full lifecycle of this AI solution.
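If you want to create the namespace ahead of time, a plain kubectl command is enough (the name `kre` below simply follows the convention mentioned above):

```bash
# Create the namespace that will hold the KRE components
kubectl create namespace kre
```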
The Engine is composed of the following components:
- Admin UI
- Admin API
- K8s Manager
- Mongo Writer
- MongoDB
- NATS-Streaming
Konstellation Runtime Transport is a compressed file containing the definition of a runtime version, including the code that must be executed and a YAML file called `kre.yaml` describing the desired workflow definitions.

The generic structure of a `kre.yaml` is as follows:
```yaml
version: my-project-v1
description: This is the new version that solves some problems.
entrypoint:
  proto: public_input.proto
  image: konstellation/kre-runtime-entrypoint:latest

config:
  variables:
    - API_KEY
    - API_SECRET
  files:
    - HTTPS_CERT

nodes:
  - name: ETL
    image: konstellation/kre-py:latest
    src: src/etl/execute_etl.py

  - name: Execute DL Model
    image: konstellation/kre-py:latest
    src: src/execute_model/execute_model.py

  - name: Create Output
    image: konstellation/kre-py:latest
    src: src/output/output.py

  - name: Client Metrics
    image: konstellation/kre-py:latest
    src: src/client_metrics/client_metrics.py

workflows:
  - name: New prediction
    entrypoint: MakePrediction
    sequential:
      - ETL
      - Execute DL Model
      - Create Output

  - name: Save Client Metrics
    entrypoint: SaveClientMetric
    sequential:
      - Client Metrics
```
In order to start development on this project you will need these tools:
- gettext: OS package to fill templates during deployment
- minikube: the local version of Kubernetes to deploy KRE
- helm: K8s package manager. Make sure you have v3+
- helm-docs: Helm doc auto-generation tool
- yq: YAML processor. Make sure you have v4+
- pre-commit: Pre-commit hooks execution tool that ensures best practices are followed before committing any change
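A quick way to confirm everything is installed with suitable versions is to query each tool from the command line (these are the standard version flags for these tools):

```bash
# Check that the development tooling is available and recent enough
gettext --version
minikube version
helm version          # v3+ required
helm-docs --version
yq --version          # v4+ required
pre-commit --version
```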
From the repository root execute the following commands:

```
pre-commit install
pre-commit install-hooks
```

Note: contributed commits that have not passed the required hooks will be rejected.
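You can also run the hooks manually against the whole codebase, which is useful right after installing them:

```bash
# Run every configured hook over all files in the repository
pre-commit run --all-files
```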
- Minikube >= 1.26
- Docker >= 18.9, if used as driver for Minikube. Check this for a complete list of drivers for Minikube
This repo contains a tool called `./krectl.sh` to handle common actions you will need during development.

All the configuration needed to run KRE locally can be found in the `.krectl.conf` file. Usually you will be fine with the default values. Check Minikube's parameters if you need to tweak the resources assigned to it.
Run help to get info for each command:
```
$> krectl.sh [command] --help

// Outputs:

krectl.sh -- a tool to manage KRE environment during development.

syntax: krectl.sh <command> [options]

  commands:
    dev      creates a complete local environment and auto-login to frontend.
    start    starts minikube kre profile.
    stop     stops minikube kre profile.
    login    creates a login URL and open your browser automatically on the admin page.
    build    calls docker to build all images inside minikube.
    deploy   calls helm to create install/upgrade a kre release on minikube.
    delete   calls kubectl to remove runtimes or versions.

  global options:
    h        prints this help.
    v        verbose mode.
```
To install KRE in your local environment:

```
$ ./krectl.sh dev
```

It will install everything in the namespace specified in your development `.krectl.conf` file.
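When iterating afterwards, the `build` and `deploy` commands listed in the help above can be used to rebuild the images inside minikube and upgrade the release without recreating the whole environment (a sketch; check `./krectl.sh [command] --help` for the available options):

```bash
# Rebuild all images inside minikube, then upgrade the kre release with helm
./krectl.sh build
./krectl.sh deploy
```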
First, remember to edit your `/etc/hosts` file; see the `./krectl.sh dev` output for more details.

NOTE: If you have the hostctl tool installed, updating `/etc/hosts` will be done automatically too.
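As an illustration, the required entries map the local KRE hostnames to the Minikube IP. The hostnames below are assumptions based on the login URL shown later; use the ones printed by `./krectl.sh dev`:

```bash
# Append the KRE hostnames pointing at the minikube "kre" profile IP (example hostnames)
echo "$(minikube ip -p kre) admin.kre.local api.kre.local" | sudo tee -a /etc/hosts
```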
Now you can access the Admin UI by visiting the login URL, which will be opened automatically when executing the following script:

```
$ ./krectl.sh login [--new]
```

You will see an output like this:

```
⏳ Calling Admin API...
Login done. Open your browser at:

  http://admin.kre.local/signin/c7d024eb-ce35-4328-961a-7d2b79ee8988

✔️ Done.
```
There are three main stages in the development lifecycle of KRE, depending on whether we are adding a new feature, releasing a new version with some features, or applying a fix to a current release.
To add new features, just create a feature branch from main; after merging the Pull Request, a workflow will run the tests. If all tests pass, a new alpha tag will be created (e.g. v0.0-alpha.0), and a new release will be generated from this tag.
After releasing a number of alpha versions, you may want to create a release version. This process must be triggered with the Release workflow, which is a manual process. This workflow will create a new release branch and a new tag following the pattern v0.0.0. Along with this tag, a new release will be created.
If you find a bug in a release, you can apply a bugfix by creating a fix branch from the specific release branch and opening a Pull Request towards that same release branch. When merged, the tests will run against it, and after all tests pass, a new fix tag will be created, increasing the patch portion of the version, and a new release will be built and published.
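A sketch of that flow with git (branch and tag names are hypothetical; use the actual release branch you are fixing):

```bash
# Branch off the affected release branch and open a PR back against it
git checkout release/v1.2.0        # hypothetical release branch name
git checkout -b fix/my-bugfix
# ...commit the fix, push, and open a Pull Request targeting release/v1.2.0
# Once merged and the tests pass, CI creates a fix tag (e.g. v1.2.1) and publishes the release
```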