This repository has been archived by the owner on Jan 17, 2023. It is now read-only.


DISCONTINUATION OF PROJECT.

This project will no longer be maintained by Intel.

This project has been identified as having known security escapes.

Intel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.

Intel no longer accepts patches to this project.

oneContainer API

OneContainer-API is a platform to enable unified APIs for containerized services and backends in multiple segments like AI, Database, and Media.

The framework is backend agnostic and has been tested with Intel® optimized stacks: the Deep Learning Reference Stack (DLRS) for deep learning and the Data Services Reference Stack (DSRS) for data services. For more information on Intel® System Stacks for Linux, check out stacks.

Installation

The requirements for onecontainer-api are the following:

  • Python >= 3.7
  • poetry >= 0.12
  • docker version 19
  • docker-compose version 1.25

Development environment

To install the dependencies, run the following commands:

$ poetry install

Using the virtual environment created by poetry makes all dependencies available. To start this virtual environment, execute the following command:

$ poetry shell

To start onecontainer_api from the CLI (oca), run the following command:

$ poetry run oca launch

This will start a uvicorn web server on port 8000. Run uvicorn --help for a full list of available parameters.

Documentation

Why?

Many services are built and deployed as Docker containers; the difficulty is connecting the last mile: how to expose these services to the ISV or user.

This is where oneContainerAPI comes in: it provides a unified interface for ISVs to deploy and consume services.

Features

  • Microservice architecture
  • Unified APIs for service backends from various segments like AI, Media and DB
  • Async queuing for compute / IO intensive requests
  • Backends - Cassandra DB, DLRS PyTorch with Torchub
  • Self-documentation

Architecture

A 10,000 ft view of the architecture of oneContainerAPI:

Various components that form the architecture are described below:

Frontend / External API with service Queue

An ISV uses the frontend API to deploy services for their users to consume.

M-API and S-API

The API is divided into two sections: the Management API (M-API) and the Service API (S-API).

The M-API is used to register backend services so that they can be consumed through the S-API. The S-API is used by end users to consume and access deployed backends.

Example of management APIs
  • List services and drivers:

Example of service APIs
  • List service functions for AI and DB:
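The listing calls above can be sketched with Python's standard library. Note that the endpoint paths below (/service, /driver, and the per-backend function listing) are assumptions for illustration; the authoritative paths are in docs/api-reference.

```python
import urllib.request

BASE = "http://localhost:8000"  # default uvicorn port started by `oca launch`

# M-API: list registered services and drivers (paths are assumptions)
list_services = urllib.request.Request(f"{BASE}/service", method="GET")
list_drivers = urllib.request.Request(f"{BASE}/driver", method="GET")

# S-API: list functions exposed by a deployed AI backend
# (this path scheme is hypothetical)
ai_functions = urllib.request.Request(f"{BASE}/ai/1/functions", method="GET")

# urllib.request.urlopen(req) would send each call to a running instance
for req in (list_services, list_drivers, ai_functions):
    print(req.method, req.full_url)
```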

Backend and drivers

Register a backend stack service

A backend is any containerized service, for example dlrs_pytorch or dbrs_cassandra. A driver is a client, designed as a REST service, that is used to consume a backend service.

Infrastructure management is out of scope for onecontainer-api; the application is not aware of any backend services unless they are manually recorded in the onecontainer-api database.

To create a record for each backend service you want to consume, execute a POST call to the /service/ endpoint with the required data.
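A registration call might look like the sketch below, using only the standard library. The field names in the record are assumptions meant to describe a DLRS PyTorch backend; see docs/api-reference for the actual schema.

```python
import json
import urllib.request

# Hypothetical service record; field names are assumptions, not the
# confirmed onecontainer-api schema.
service = {
    "name": "dlrs-torchserve",
    "app": "dlrs_pytorch",
    "host": "10.0.0.5",
    "port": 8080,
    "scope": "ai",
}

req = urllib.request.Request(
    "http://localhost:8000/service/",
    data=json.dumps(service).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send the registration call
print(req.method, req.full_url)
```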

To review the API reference, go to docs/api-reference

Register a plugin client driver

When a record is created for a backend service, onecontainer-api assigns a driver if one is available that can be used to consume the stack.

A list of drivers is created during onecontainer-api installation; these are what we call native drivers.

Onecontainer-api also supports custom driver clients; these are what we call plugin drivers.

A supported plugin driver complies with the following specification:

  • It must contain a metadata.json file that can be deserialized into a DriverBase object (refer to the API reference for object formats).
  • It must be deployed in a way supported by onecontainer-api; currently the only available method is using Dockerfiles.

If no driver is available, you will need to create one and then assign it to a stack. The templates/ folder contains templates for creating drivers for each function scope.
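A plugin driver's metadata.json might look like the sketch below. The field names are assumptions intended to mirror a DriverBase object; consult the API reference for the authoritative format.

```python
import json

# Hypothetical metadata for a plugin driver targeting a Cassandra
# backend; field names are assumptions, not the confirmed schema.
metadata = {
    "name": "my-custom-db-driver",
    "version": "0.1.0",
    "app": "cassandra",
    "scope": "db",
}

# This is the content that would be written to metadata.json
print(json.dumps(metadata, indent=2))
```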

Service workflow

How a request is sent to a queue and how the queue worker processes the request and returns it:
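The queued flow above can be illustrated with a minimal producer/worker sketch: the API enqueues a request together with a per-request result queue, a worker consumes it, and the caller blocks until the result arrives. This is an illustration of the pattern, not onecontainer-api's actual implementation.

```python
import queue
import threading

jobs = queue.Queue()

def worker():
    # Consume queued requests; a None payload is a shutdown sentinel.
    while True:
        payload, result_q = jobs.get()
        if payload is None:
            break
        # A real worker would dispatch to the backend via its driver;
        # here we just echo the payload back.
        result_q.put({"status": "done", "echo": payload})
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# The API side: enqueue a request and wait for the worker's answer.
result_q = queue.Queue()
jobs.put(({"function": "predict", "input": [1, 2, 3]}, result_q))
print(result_q.get(timeout=5))
```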

Self documentation

Onecontainer-api uses the OpenAPI standard to generate documentation for APIs defined with the FastAPI framework. The updated API docs can be found at docs/api-reference and can be rendered with any OpenAPI frontend. When the application is running, the API docs can also be viewed at host:port/docs.

Getting started

For a Getting started tutorial, view the Readme.md in the demo directory.

Contributing

We'd love to accept your patches. If you have improvements to stacks, send us your pull requests; if you find any issues, raise an issue. Contributions can be anything from documentation updates to optimizations!

Security Notes

The ffmpeg pipeline is serialized using pickle; deserializing this data from the database is a possible security issue if the database is untrusted. We will look into fixing this issue in our next release.
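The risk is that pickle.loads() can execute arbitrary code embedded in untrusted bytes, whereas a plain-data pipeline description round-trips safely through JSON. The sketch below illustrates the trade-off with a made-up pipeline dict; switching to JSON is a suggestion here, not the project's stated plan.

```python
import json
import pickle

# A hypothetical ffmpeg pipeline description (plain data only).
pipeline = {"filters": ["scale=640:480", "fps=30"], "codec": "h264"}

unsafe_blob = pickle.dumps(pipeline)  # pickle.loads() on untrusted bytes
                                      # can execute arbitrary code
safe_blob = json.dumps(pipeline)      # json.loads() only ever yields data

# Both round-trip this trusted payload identically; only JSON stays
# safe when the stored blob comes from an untrusted database.
assert json.loads(safe_blob) == pickle.loads(unsafe_blob) == pipeline
```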

Security issues can be reported to Intel's security incident response team via https://intel.com/security.

Mailing List

See our public mailing list page for details on how to contact us. You should only subscribe to the Stacks mailing lists using an email address that you don't mind being public.
